HTTP request compression - Java

General Use-Case
Imagine a client that is uploading large amounts of JSON. The Content-Type should remain application/json because that describes the actual data. Accept-Encoding and Transfer-Encoding seem to be for telling the server how it should format the response. It appears that responses use the Content-Encoding header explicitly for this purpose, but it is not a valid request header.
Is there something I am missing? Has anyone found an elegant solution?
Specific Use-Case
My use-case is that I have a mobile app that is generating large amounts of JSON (and some binary data in some cases but to a lesser extent) and compressing the requests saves a large amount of bandwidth. I am using Tomcat as my Servlet container. I am using Spring for its MVC annotations primarily just to abstract away some of the JEE stuff into a much cleaner, annotation-based interface. I also use Jackson for auto (de)serialization.
I also use nginx, but I am not sure if that's where I want the decompression to take place. The nginx nodes simply balance the requests, which are then distributed through the data center. It would be just as nice to keep the body compressed until it actually got to the node that was going to process it.
Thanks in advance,
John
EDIT:
The discussion between @DaSourcerer and me should be really helpful for those who are curious about the state of things at the time of writing.
I ended up implementing a solution of my own. Note that this specifies the branch "ohmage-3.0", but it will soon be merged into the master branch. You might want to check there to see if I have made any updates/fixes.
https://github.com/ohmage/server/blob/ohmage-3.0/src/org/ohmage/servlet/filter/DecompressionFilter.java

It appears [Content-Encoding] is not a valid request header.
That is actually not quite true. As per RFC 2616, sec. 14.11, Content-Encoding is an entity header, which means it can be applied to the entities of both HTTP responses and requests. Through the powers of multipart MIME messages, even selected parts of a request (or response) can be compressed.
However, web server support for compressed request bodies is rather slim. Apache supports it to a degree via its mod_deflate module. It's not entirely clear to me if nginx can handle compressed requests.
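For the client side, sending a compressed body is straightforward. Here is a minimal sketch using HttpURLConnection (the URL and payload are placeholders, not from the discussion above):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipUploadClient {
    public static void main(String[] args) throws Exception {
        byte[] json = "{\"key\":\"value\"}".getBytes(StandardCharsets.UTF_8);
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/upload").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Content-Type still describes the payload; Content-Encoding declares the compression
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Content-Encoding", "gzip");
        try (OutputStream out = new GZIPOutputStream(conn.getOutputStream())) {
            out.write(json);
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}

The server still has to be prepared to undo the encoding, which is what the filter below takes care of.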

Since the original code is no longer available, here it is in case someone comes here needing it.
I use "Content-Encoding: gzip" to decide whether the filter needs to decompress the request or not.
Here's the code:
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException
{
    HttpServletRequest httpServletRequest = (HttpServletRequest) request;
    String contentEncoding = httpServletRequest.getHeader("Content-Encoding");
    if (contentEncoding != null && contentEncoding.indexOf("gzip") > -1)
    {
        try
        {
            final InputStream decompressStream = StreamHelper.decompressStream(httpServletRequest.getInputStream());
            // Wrap the request so downstream code reads the decompressed body
            httpServletRequest = new HttpServletRequestWrapper(httpServletRequest)
            {
                @Override
                public ServletInputStream getInputStream() throws IOException
                {
                    return new DecompressServletInputStream(decompressStream);
                }

                @Override
                public BufferedReader getReader() throws IOException
                {
                    return new BufferedReader(new InputStreamReader(decompressStream));
                }
            };
        }
        catch (IOException e)
        {
            mLogger.error("error while handling the request", e);
        }
    }
    chain.doFilter(httpServletRequest, response);
}
Simple ServletInputStream wrapper class:
public static class DecompressServletInputStream extends ServletInputStream
{
    private InputStream inputStream;

    public DecompressServletInputStream(InputStream input)
    {
        inputStream = input;
    }

    @Override
    public int read() throws IOException
    {
        return inputStream.read();
    }

    // Note: on Servlet 3.1+ you must also implement isFinished(), isReady()
    // and setReadListener(), which are abstract on ServletInputStream.
}
Decompression stream code:
public class StreamHelper
{
    /**
     * Gzip magic number, fixed values in the beginning to identify the gzip
     * format <br>
     * http://www.gzip.org/zlib/rfc-gzip.html#file-format
     */
    private static final byte GZIP_ID1 = 0x1f;

    /**
     * Gzip magic number, fixed values in the beginning to identify the gzip
     * format <br>
     * http://www.gzip.org/zlib/rfc-gzip.html#file-format
     */
    private static final byte GZIP_ID2 = (byte) 0x8b;

    /**
     * Returns a decompressing input stream if needed.
     *
     * @param input
     *            original stream
     * @return decompressing stream
     * @throws IOException
     *             exception while reading the input
     */
    public static InputStream decompressStream(InputStream input) throws IOException
    {
        PushbackInputStream pushbackInput = new PushbackInputStream(input, 2);
        byte[] signature = new byte[2];
        // read() may return fewer than two bytes; only unread what was actually read
        int len = pushbackInput.read(signature);
        if (len > 0)
        {
            pushbackInput.unread(signature, 0, len);
        }
        if (len == 2 && signature[0] == GZIP_ID1 && signature[1] == GZIP_ID2)
        {
            return new GZIPInputStream(pushbackInput);
        }
        return pushbackInput;
    }
}
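To actually activate the filter you still have to register it. A minimal sketch using the Servlet 3.0 annotation (the class name and URL pattern are placeholders; a <filter> plus <filter-mapping> entry in web.xml works the same way):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;

@WebFilter(urlPatterns = "/*")
public class DecompressionFilter implements Filter {
    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // gzip detection and request wrapping as shown above
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}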

Add this to your headers when you are sending JSON:
"Accept-Encoding": "gzip, deflate"
Client code:
HttpUriRequest request = new HttpGet(url);
request.addHeader("Accept-Encoding", "gzip");
@JulianReschke pointed out that there can be a case of:
"Content-Encoding": "gzip, gzip"
so the extended server code will be:
InputStream in = response.getEntity().getContent();
Header encodingHeader = response.getFirstHeader("Content-Encoding");
String gzip = "gzip";
if (encodingHeader != null) {
    String encoding = encodingHeader.getValue().toLowerCase();
    int firstGzip = encoding.indexOf(gzip);
    if (firstGzip > -1) {
        in = new GZIPInputStream(in);
        int secondGzip = encoding.indexOf(gzip, firstGzip + gzip.length());
        if (secondGzip > -1) {
            in = new GZIPInputStream(in);
        }
    }
}
I suppose that nginx is used as a load balancer or proxy, so you need to set Tomcat to do the decompression.
Add the following attributes to the Connector in server.xml on Tomcat:
<Connector
    compression="on"
    compressionMinSize="2048"
    compressableMimeType="text/html,application/json"
    ... />
Note that these Connector attributes control compression of responses. Accepting gzipped requests in Tomcat is a different story: you'll have to put a filter in front of your servlets (such as the one above) to enable request decompression. You can find more about that here.

Related

Java: What is the maximum size of a POST request body in an HTTPServer?

I'm using a simple Java HttpServer instance to connect my program to a client application. In this architecture, the server is a very barebones layer used to serve HTML pages statically and deal with the body of POST requests.
The HttpServer is then linked to an HttpHandler, whose implementation looks as follows:
@Override
public void handle(HttpExchange exchange) throws IOException {
    Headers h = exchange.getResponseHeaders();
    h.add("Content-Type", getContentType(exchange));
    if ("GET".equals(exchange.getRequestMethod())) {
        File file = // Static file location
        h.add("Access-Control-Allow-Origin", "*");
        h.add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
        h.add("Access-Control-Allow-Methods", "OPTIONS, GET, POST, PUT, DELETE");
        h.add("Access-Control-Allow-Credentials", "true");
        OutputStream out = exchange.getResponseBody();
        exchange.sendResponseHeaders(200, file.length());
        exchange.getResponseBody().write(Files.readAllBytes(file.toPath()));
        exchange.close();
        out.close();
    } else if ("POST".equals(exchange.getRequestMethod())) {
        String req = new String(exchange.getRequestBody().readAllBytes());
        String response = "{}";
        String path = exchange.getRequestURI().getPath(); // assumed: 'path' is derived from the request URI
        if (path.startsWith("api/input")) {
            // deal with request body
        }
        exchange.sendResponseHeaders(200, response.length());
        exchange.getResponseBody().write(response.getBytes(StandardCharsets.UTF_8));
        exchange.close();
    }
}
When sending a POST request with a longer body (roughly 3 MB), the exchange fails with an Error 413: "Request entity too large". Reducing the request size to, let's say, 300 KB seems to do the trick, so I am led to think that the error is genuine.
While I am familiar with the concept of server implementations limiting the request body size of POST requests (e.g. Express limiting the size to 100 KB unless otherwise specified), I am not able to find the corresponding parameter in a vanilla com.sun.net.httpserver.HttpServer/HttpHandler implementation, nor does the documentation help me further in this regard.
Does anyone know the limit? Is there a way for the limit to be increased?

Jackson error "Illegal character... only regular white space allowed" when parsing JSON

I am trying to retrieve JSON data from a URL but get the following error:
Illegal character ((CTRL-CHAR, code 31)):
only regular white space (\r, \n,\t) is allowed between tokens
My code:
final URI uri = new URIBuilder(UrlConstants.SEARCH_URL)
        .addParameter("keywords", searchTerm)
        .addParameter("count", "50")
        .build();
node = new ObjectMapper().readTree(new URL(uri.toString())); // <<<<< THROWS THE ERROR
The URL constructed is, e.g., https://www.example.org/api/search.json?keywords=iphone&count=50
What is going wrong here? And how can I parse this data successfully?
Imports:
import com.google.appengine.repackaged.org.codehaus.jackson.JsonNode;
import com.google.appengine.repackaged.org.codehaus.jackson.map.ObjectMapper;
import com.google.appengine.repackaged.org.codehaus.jackson.node.ArrayNode;
import org.apache.http.client.utils.URIBuilder;
example response
{
    meta: {
        indexAllowed: false
    },
    products: {
        products: [
            {
                id: 1,
                name: "Apple iPhone 6 16GB 4G LTE GSM Factory Unlocked"
            },
            {
                id: 2,
                name: "Apple iPhone 7 8GB 4G LTE GSM Factory Unlocked"
            }
        ]
    }
}
I got this same issue, and I found that it was caused by the Content-Encoding: gzip header. The client application (where the exception was being thrown) was not able to handle this content-encoding. FWIW the client application was using io.github.openfeign:feign-core:9.5.0, and this library appears to have some issues around compression (link).
You might try adding the header Accept-Encoding: identity to your request; however, not all web servers/web applications are configured properly, and some seem to disregard this header. See this question for more details about how to prevent gzipped content.
I had a similar issue. After some research, I found that RestTemplate uses the SimpleClientHttpRequestFactory, which does not support gzip encoding. To enable gzip decoding of your responses, you will need to set a new request factory on the RestTemplate object - HttpComponentsClientHttpRequestFactory.
restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
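HttpComponentsClientHttpRequestFactory is backed by Apache HttpClient, so (assuming a Maven build, and unless something else already pulls it in) you would also need the httpclient artifact on the classpath:

<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
</dependency>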
The message should be pretty self-explanatory:
There is an illegal character (in this case character code 31, i.e. the control code "Unit Separator") in the JSON you are processing.
In other words, the data you are receiving is not proper JSON.
Background:
The JSON spec (RFC 7159) says:
JSON Grammar
A JSON text is a sequence of tokens. The set of tokens includes six
structural characters, strings, numbers, and three literal names.
[...]
Insignificant whitespace is allowed before or after any of the
six structural characters.
ws = *(
       %x20 /  ; Space
       %x09 /  ; Horizontal tab
       %x0A /  ; Line feed or New line
       %x0D )  ; Carriage return
In other words: JSON may contain whitespace between the tokens ("tokens" meaning the parts of the JSON, i.e. lists, strings, etc.), but "whitespace" is defined to mean only the characters Space, Tab, Line feed and Carriage return.
Your document contains something else (code 31) where only whitespace is allowed, and hence is not valid JSON.
To parse this:
Unfortunately, the Jackson library you are using does not offer a way to parse this malformed data. To parse this successfully, you will have to filter the JSON before it is handled by Jackson.
You will probably have to retrieve the (pseudo-)JSON yourself from the REST service, using standard HTTP, e.g. java.net.HttpURLConnection. Then suitably filter out the "bad" characters, and pass the resulting string to Jackson. How to do this exactly depends on how you use Jackson.
Feel free to ask a separate question if you are having trouble :-).
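A minimal sketch of that filtering step (the class and method names are placeholders, and it assumes Java 9+ for readAllBytes). Note that if the stray byte is 0x1F because the response is actually gzipped, decompressing it as described in the other answers is the proper fix rather than filtering:

import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class JsonSanitizer {
    // Drops control characters that are invalid between JSON tokens,
    // keeping the whitespace characters the spec allows.
    public static String sanitize(String raw) {
        StringBuilder sb = new StringBuilder(raw.length());
        for (char c : raw.toCharArray()) {
            if (c >= 0x20 || c == '\r' || c == '\n' || c == '\t') {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static String fetch(String url) throws Exception {
        try (InputStream in = new URL(url).openStream()) {
            return sanitize(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}

The result can then be handed to Jackson, e.g. new ObjectMapper().readTree(JsonSanitizer.fetch(uri.toString())).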
I had the same problem. After handling gzip it was fixed. Please refer to my code:
public String sendPostRequest(String req) throws Exception {
    // Create connection
    URL urlObject = new URL(mURL);
    HttpURLConnection connection = (HttpURLConnection) urlObject.openConnection();
    connection.setRequestMethod("POST");
    connection.setRequestProperty("Content-Type", "application/json");
    connection.setRequestProperty("Content-Length", Integer.toString(req.getBytes().length));
    connection.setRequestProperty("Content-Language", "en-US");
    connection.setUseCaches(false);
    connection.setDoOutput(true);

    // Send request
    DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
    wr.writeBytes(req);
    wr.close();

    // Response handling: wrap in GZIPInputStream when the response is compressed
    InputStream responseBody = null;
    if (isGzipResponse(connection)) {
        responseBody = new GZIPInputStream(connection.getInputStream());
    } else {
        responseBody = connection.getInputStream();
    }
    convertStreamToString(responseBody); // sets the 'response' instance field
    return response.toString();
}
protected boolean isGzipResponse(HttpURLConnection con) {
    String encodingHeader = con.getHeaderField("Content-Encoding");
    return (encodingHeader != null && encodingHeader.toLowerCase().indexOf("gzip") != -1);
}

public void convertStreamToString(InputStream in) throws Exception {
    if (in != null) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int length = 0;
        while ((length = in.read(buffer)) != -1) {
            baos.write(buffer, 0, length);
        }
        response = new String(baos.toByteArray());
        baos.close();
    } else {
        response = null;
    }
}
I had the same issue with Zalando Logbook in my Spring Boot application, and after reading the answers here carefully, I realized that the response interceptor must be applied after whatever takes care of decompression:
@Configuration
public class RestTemplateConfig {

    // [....]

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplateBuilder()
                .requestFactory(new MyRequestFactorySupplier())
                .build();
    }

    class MyRequestFactorySupplier implements Supplier<ClientHttpRequestFactory> {
        @Override
        public ClientHttpRequestFactory get() {
            CloseableHttpClient client = HttpClientBuilder.create()
                    .addInterceptorFirst(logbookHttpRequestInterceptor)
                    // wrong: .addInterceptorFirst(logbookHttpResponseInterceptor)
                    .addInterceptorLast(logbookHttpResponseInterceptor)
                    .build();
            HttpComponentsClientHttpRequestFactory clientHttpRequestFactory =
                    new HttpComponentsClientHttpRequestFactory(client);
            return clientHttpRequestFactory;
        }
    }
}
We had the same issue in our integration tests recently. We have a Spring Boot application and we use WireMock to mock an integrated microservice. For one of the test GET requests we had implemented, we started getting this error. We had to downgrade WireMock from 2.18.0 to 2.17.0 and it worked fine. Due to some bug, the Jackson parser and that particular version of WireMock didn't work together. We didn't have time to figure out what the bug actually was in those libraries.
Those who use FeignClient, please refer to this answer: spring-feign-not-compressing-response.
Spring is not able to decode the response on the fly, so you need to define a custom GZip decoder.
That solved it for me.
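A rough sketch of such a decoder, assuming the feign.codec.Decoder API (header-name casing and how you register the decoder depend on your Feign setup, so treat this as an outline rather than a drop-in):

import feign.FeignException;
import feign.Response;
import feign.codec.DecodeException;
import feign.codec.Decoder;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.Type;
import java.util.Collection;
import java.util.Collections;
import java.util.zip.GZIPInputStream;

// Gunzips the response body before delegating to the real decoder.
public class GzipDecoder implements Decoder {

    private final Decoder delegate;

    public GzipDecoder(Decoder delegate) {
        this.delegate = delegate;
    }

    @Override
    public Object decode(Response response, Type type)
            throws IOException, DecodeException, FeignException {
        Collection<String> encoding =
                response.headers().getOrDefault("content-encoding", Collections.emptyList());
        if (response.body() == null || !encoding.contains("gzip")) {
            return delegate.decode(response, type);
        }
        try (InputStream gzip = new GZIPInputStream(response.body().asInputStream());
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = gzip.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            // Rebuild the response with the decompressed bytes and decode as usual
            return delegate.decode(response.toBuilder().body(out.toByteArray()).build(), type);
        }
    }
}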

Download Large file from server using REST template Java Spring MVC

I have a REST service which sends me a large ISO file; there are no issues in the REST service.
Now I have written a web application which calls the REST service to get the file. On the client (web app) side I receive an Out Of Memory exception. Below is my code:
HttpHeaders headers = new HttpHeaders(); // Line 1
headers.setAccept(Arrays.asList(MediaType.APPLICATION_OCTET_STREAM)); // Line 2
headers.set("Content-Type", "application/json"); // Line 3
headers.set("Cookie", "session=abc"); // Line 4
HttpEntity statusEntity = new HttpEntity(headers); // Line 5
String uri_status = new String("http://" + ip + ":8080/pcap/file?fileName={name}"); // Line 6
ResponseEntity<byte[]> resp_status = rt.exchange(uri_status, HttpMethod.GET, statusEntity, byte[].class, "File5.iso"); // Line 7
I receive the out-of-memory exception at line 7. I guess I will have to buffer and fetch the file in parts, but I don't know how I can get this file from the server; the size of the file is around 500 to 700 MB.
Can anyone please assist?
Exception Stack:
org.springframework.web.util.NestedServletException: Handler processing failed; nested exception is java.lang.OutOfMemoryError: Java heap space
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:972)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:778)
javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
root cause
java.lang.OutOfMemoryError: Java heap space
java.util.Arrays.copyOf(Arrays.java:3236)
java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
org.springframework.util.FileCopyUtils.copy(FileCopyUtils.java:113)
org.springframework.util.FileCopyUtils.copyToByteArray(FileCopyUtils.java:164)
org.springframework.http.converter.ByteArrayHttpMessageConverter.readInternal(ByteArrayHttpMessageConverter.java:58)
org.springframework.http.converter.ByteArrayHttpMessageConverter.readInternal(ByteArrayHttpMessageConverter.java:1)
org.springframework.http.converter.AbstractHttpMessageConverter.read(AbstractHttpMessageConverter.java:153)
org.springframework.web.client.HttpMessageConverterExtractor.extractData(HttpMessageConverterExtractor.java:81)
org.springframework.web.client.RestTemplate$ResponseEntityResponseExtractor.extractData(RestTemplate.java:627)
org.springframework.web.client.RestTemplate$ResponseEntityResponseExtractor.extractData(RestTemplate.java:1)
org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:454)
org.springframework.web.client.RestTemplate.execute(RestTemplate.java:409)
org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:385)
com.pcap.webapp.HomeController.getPcapFile(HomeController.java:186)
My server-side REST service code, which is working fine, is:
@RequestMapping(value = URIConstansts.GET_FILE, produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE }, method = RequestMethod.GET)
public void getFile(@RequestParam(value = "fileName", required = false) String fileName, HttpServletRequest request, HttpServletResponse response) throws IOException {
    File result = new File("/home/arpit/Documents/PCAP/dummyPath/" + fileName);
    if (result.exists()) {
        InputStream inputStream = new FileInputStream("/home/arpit/Documents/PCAP/dummyPath/" + fileName);
        String type = result.toURL().openConnection().guessContentTypeFromName(fileName);
        response.setHeader("Content-Disposition", "attachment; filename=" + fileName);
        response.setHeader("Content-Type", type);
        byte[] reportBytes = new byte[100]; // New change: copy in small chunks
        OutputStream os = response.getOutputStream(); // New change
        int read = 0;
        while ((read = inputStream.read(reportBytes)) != -1) {
            os.write(reportBytes, 0, read);
        }
        os.flush();
        os.close();
    }
}
Here is how I do it. Based on hints from this Spring Jira issue.
RestTemplate restTemplate // = ...;

// Optional Accept header
RequestCallback requestCallback = request -> request.getHeaders()
        .setAccept(Arrays.asList(MediaType.APPLICATION_OCTET_STREAM, MediaType.ALL));

// Streams the response instead of loading it all in memory
ResponseExtractor<Void> responseExtractor = response -> {
    // Here I write the response to a file but do what you like
    Path path = Paths.get("some/path");
    Files.copy(response.getBody(), path);
    return null;
};

restTemplate.execute(URI.create("www.something.com"), HttpMethod.GET, requestCallback, responseExtractor);
From the aforementioned Jira issue:
Note that you cannot simply return the InputStream from the extractor, because by the time the execute method returns, the underlying connection and stream are already closed.
Update for Spring 5
Spring 5 introduced the WebClient class which allows asynchronous (e.g. non-blocking) http requests. From the doc:
By comparison to the RestTemplate, the WebClient is:
non-blocking, reactive, and supports higher concurrency with less hardware resources.
provides a functional API that takes advantage of Java 8 lambdas.
supports both synchronous and asynchronous scenarios.
supports streaming up or down from a server.
To get WebClient in Spring Boot, you need this dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
For the moment, I'm sticking with RestTemplate because I don't want to pull in another dependency only to get access to WebClient.
As @bernie mentioned, you can use WebClient to achieve this:
public Flux<DataBuffer> downloadFileUrl() throws IOException {
    WebClient webClient = WebClient.create();
    // Request the service to get the file data as a stream of DataBuffers
    return webClient.get()
            .uri(this.fileUrl)
            .accept(MediaType.APPLICATION_OCTET_STREAM)
            .retrieve()
            .bodyToFlux(DataBuffer.class);
}

@GetMapping(produces = MediaType.APPLICATION_OCTET_STREAM_VALUE)
public void downloadFile(HttpServletResponse response) throws IOException {
    Flux<DataBuffer> dataStream = this.downloadFileUrl();
    // Streams the body to the response instead of loading it all in memory
    DataBufferUtils.write(dataStream, response.getOutputStream())
            .map(DataBufferUtils::release)
            .blockLast();
}
You can still use WebClient even if you don't have a reactive server stack - Rossen Stoyanchev (a member of the Spring Framework team) explains it quite well in the "Guide to 'Reactive' for Spring MVC Developers" presentation. During this presentation, Rossen Stoyanchev mentioned that they thought about deprecating RestTemplate, but decided to postpone it after all; it may still happen in the future!
The main disadvantage of using WebClient so far is its quite steep learning curve (reactive programming), but I think there is no way to avoid it in the future, so better to take a look at it sooner rather than later.
This prevents loading the entire request into memory.
SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
requestFactory.setBufferRequestBody(false);
RestTemplate rest = new RestTemplate(requestFactory);
A java.lang.OutOfMemoryError: Java heap space can be solved by adding more memory to the JVM:
-Xmxn Specifies the maximum size, in bytes, of the memory allocation pool. This value must be a multiple of 1024 greater than 2 MB. Append the
letter k or K to indicate kilobytes, or m or M to indicate megabytes.
The default value is chosen at runtime based on system configuration.
For server deployments, -Xms and -Xmx are often set to the same value.
See Garbage Collector Ergonomics at
http://docs.oracle.com/javase/7/docs/technotes/guides/vm/gc-ergonomics.html
Examples:
-Xmx83886080
-Xmx81920k
-Xmx80m
Probably the problem you have is not strictly related to the request you are trying to execute (downloading a large file); rather, the memory allocated to the process is not enough.
A better version of the above correct answer could be the code below. This method sends the download request to another application or service that acts as the actual source of truth for the downloaded information.
public void download(HttpServletRequest req, HttpServletResponse res, String url)
        throws ResourceAccessException, GenericException {
    try {
        logger.info("url::" + url);
        if (restTemplate == null)
            logger.info("******* rest template is null***********************");
        RequestCallback requestCallback = request -> request.getHeaders()
                .setAccept(Arrays.asList(MediaType.APPLICATION_OCTET_STREAM, MediaType.ALL));
        // Streams the response instead of loading it all in memory
        ResponseExtractor<ResponseEntity<InputStream>> responseExtractor = response -> {
            String contentDisposition = response.getHeaders().getFirst("Content-Disposition");
            if (contentDisposition != null) {
                // Temporary location for files that will be downloaded from the microservice and
                // act as the final source of the download to the user
                String filePath = "/home/devuser/fileupload/download_temp/" + contentDisposition.split("=")[1];
                Path path = Paths.get(filePath);
                Files.copy(response.getBody(), path, StandardCopyOption.REPLACE_EXISTING);
                // Create a new input stream from the temporary location and use it for the download
                InputStream inputStream = new FileInputStream(filePath);
                String type = req.getServletContext().getMimeType(filePath);
                res.setHeader("Content-Disposition", "attachment; filename=" + contentDisposition.split("=")[1]);
                res.setHeader("Content-Type", type);
                byte[] outputBytes = new byte[100];
                OutputStream os = res.getOutputStream();
                int read = 0;
                while ((read = inputStream.read(outputBytes)) != -1) {
                    os.write(outputBytes, 0, read);
                }
                os.flush();
                os.close();
                inputStream.close();
            }
            return null;
        };
        restTemplate.execute(url, HttpMethod.GET, requestCallback, responseExtractor);
    } catch (Exception e) {
        logger.info(e.toString());
        throw e;
    }
}
You should use a multipart file attachment, so the file stream isn't loaded into memory.
In this example, I use a REST service implemented with Apache CXF.
...
import org.apache.cxf.jaxrs.ext.multipart.Attachment;
...

@Override
@Path("/put")
@Consumes("multipart/form-data")
@Produces({ "application/json" })
@POST
public SyncResponseDTO put(List<Attachment> attachments) {
    SyncResponseDTO response = new SyncResponseDTO();
    try {
        for (Attachment attr : attachments) {
            log.debug("get input filestream: " + new Date());
            InputStream is = attr.getDataHandler().getInputStream();

HTTP chunked-encoding buffering Tomcat

Please take a look at the following Java servlet doGet() method:
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
{
    response.setContentType("text/html;charset=utf-8");
    OutputStreamWriter osw = new OutputStreamWriter(response.getOutputStream(), "UTF-8");
    int j = 0;
    while (j < 2)
    {
        String s = "";
        int i = 0;
        while (i < 10000)
        {
            s = s + "a";
            i++;
        }
        System.out.println(s.length());
        osw.write(s, 0, s.length());
        j++;
    }
    osw.flush();
}
Using Tomcat as the servlet container, the generated HTTP response uses chunked transfer encoding.
I'm aware of the fact that response.getOutputStream() gives you a reference to
a decorated version of the actual OutputStream of the socket.
Tomcat decorates it in order to handle persistent connections using a chunked-encoding of the HTTP response body.
I wonder why the chunks are 0x2000 (8192 decimal) bytes. It looks like Tomcat always buffers the bytes before sending them to the socket output stream, which seems to me an inefficient way of doing the job.
In other words, when I make the call:
osw.write(s,0,s.length());
where s.length() > buffer_size,
I would expect an HTTP chunk of size s.length() and not of the size of the buffer Tomcat uses to handle the HTTP chunked encoding.
Hope this is clear.
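For what it's worth, the buffer in question is the standard servlet response buffer, which defaults to 8 KB in Tomcat, and the Servlet API lets you resize it before anything has been written. A minimal sketch (the size here is arbitrary, for illustration only):

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Must be called before any content is written; Tomcat will then emit
    // chunks up to this size instead of the default 8 KB.
    response.setBufferSize(32768);
    response.setContentType("text/html;charset=utf-8");
    // ... write the body as before; response.flushBuffer() forces whatever
    // is currently buffered to be sent out as a chunk.
}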

Multiple lines in HTTP PUT body (InputStream) causes problems

I have a method that receives an input stream via HTTP PUT, converts it into a byte[], and sends it to another method called verifySignature. I have been having a weird problem with it. The code is all right, but the message digests don't match. After debugging, I found out it works fine if my input stream has a single line of text but fails when there are multiple lines.
The request body of the HTTP PUT looks like:
{
    "Url": "http://live.dev:3000/access_tokens",
    "AuthorizationUrl": "http://live.dev:3000/client_access_tokens",
    "Cert": "test"
}
The method that responds to the PUT request:
public @ResponseBody Map<String, String> createAuthorizationServer(HttpServletRequest request) throws IOException {
    // This method converts the input stream to byte[]
    byte[] inputStream = toBodyBytes(request.getInputStream());
    X509Certificate signingCert = null;
    // ..... stuff
    // Here I am using the byte array
    signingCert = engine.verifySignature(signature, inputStream);
This works fine when the payload in the request body is something on a single line:
"test:;sample"
But it fails when there are multiple lines, like:
"test:;"
"sample"
Can someone please throw some light on this?
toBodyBytes method for your reference:
private static byte[] toBodyBytes(InputStream inputStream) throws IOException {
    final int MAX_PAYLOAD_SIZE = 10240;
    byte[] buf = new byte[MAX_PAYLOAD_SIZE];
    // protect against OutOfMemoryError in case of misconfiguration (accidentally filtering uploads)
    int bodySize = read(inputStream, buf, 0, buf.length);
    checkState(bodySize < MAX_PAYLOAD_SIZE, "Looking for signature on upload payload?");
    return copyOf(buf, bodySize);
}
Thank you!
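One thing worth checking, as an assumption about the code above: if read(inputStream, buf, 0, buf.length) performs a single read rather than filling the buffer until EOF, it can return early once the body spans several network reads, which multi-line payloads make more likely; the digest would then be computed over a truncated body. A loop that drains the stream avoids that; a minimal sketch:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads the whole body to EOF, regardless of how it is split across network reads.
private static byte[] toBodyBytes(InputStream inputStream) throws IOException {
    final int MAX_PAYLOAD_SIZE = 10240;
    ByteArrayOutputStream body = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    int n;
    while ((n = inputStream.read(buf)) != -1) {
        body.write(buf, 0, n);
        if (body.size() > MAX_PAYLOAD_SIZE) {
            throw new IOException("Payload exceeds " + MAX_PAYLOAD_SIZE + " bytes");
        }
    }
    return body.toByteArray();
}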
