Please take a look at the following Java servlet doGet() method:
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
{
    response.setContentType("text/html;charset=utf-8");
    OutputStreamWriter osw = new OutputStreamWriter(response.getOutputStream(), "UTF-8");
    int j = 0;
    while (j < 2)
    {
        String s = "";
        int i = 0;
        while (i < 10000)
        {
            s = s + "a";
            i++;
        }
        System.out.println(s.length());
        osw.write(s, 0, s.length());
        j++;
    }
    osw.flush();
}
Using Tomcat as the Servlet container, the following HTTP response gets generated:
I'm aware of the fact that response.getOutputStream() gives you a reference to a decorated version of the actual OutputStream of the socket.
Tomcat decorates it in order to handle persistent connections, using chunked encoding of the HTTP response body.
I wonder why the chunks are 0x2000 bytes (8192 in decimal). It looks like Tomcat always buffers the bytes before sending them to the socket output stream, which seems to me an inefficient way of doing the job.
In other words, when I make the call:
osw.write(s, 0, s.length());
where s.length() > buffer_size, I would expect an HTTP chunk of size s.length(), not of the size of the buffer Tomcat uses to handle the chunked encoding.
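For what it's worth, the Servlet API does let a servlet ask for a different response buffer via setBufferSize(), as long as it is called before any content is written; a sketch:

response.setBufferSize(64 * 1024); // a hint: the container may use a buffer at least this large
response.setContentType("text/html;charset=utf-8");
OutputStreamWriter osw = new OutputStreamWriter(response.getOutputStream(), "UTF-8");
// writes larger than the effective buffer are still split at the buffer boundary

But my question is about the default behavior.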
Hope this is clear.
I have PDFs mounted on an external server. I have to access them in my Java servlet and push them to the client's browser. The PDF should get downloaded directly, or it may open a 'SAVE or OPEN' dialog window.
This is what I am trying in my code, but it does not do much:
URL url = new URL("http://www01/manuals/zseries.pdf");
ByteArrayOutputStream baos = new ByteArrayOutputStream();
InputStream in = url.openStream();
int FILE_CHUNK_SIZE = 1024 * 4;
byte[] chunk = new byte[FILE_CHUNK_SIZE];
int n = 0;
while ((n = in.read(chunk)) != -1) {
    baos.write(chunk, 0, n);
}
I have tried many ways to do this but could not succeed. Any good approach to do this is welcome!
When you read the data, you get it inside your program's memory, which is on the server side. To get it to the user's browser, you also have to write everything that you have read.
Before you start writing, though, you should set some appropriate headers:
Indicate that you are sending a PDF file by setting the MIME type.
Set the content length.
Indicate that the file is intended for download rather than for display inside the browser.
To set the MIME type, use:
response.setContentType("application/pdf");
To set the content length, assuming it's the same content length that you get from the URL, use:
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.connect();
if (connection.getResponseCode() == 200) {
    int contentLength = connection.getContentLength();
    response.setContentLength(contentLength);
To indicate that you want the file to be downloaded, use:
response.setHeader( "Content-Disposition", "attachment; filename=\"zseries.pdf\"";
(Take care to change the file name to whatever you want the user to see in the save dialog box)
Finally, get the input stream from the URLConnection you just opened, get the servlet's response output stream, and start reading from one and writing to the other:
    InputStream pdfSource = connection.getInputStream();
    OutputStream pdfTarget = response.getOutputStream();
    int FILE_CHUNK_SIZE = 1024 * 4;
    byte[] chunk = new byte[FILE_CHUNK_SIZE];
    int n = 0;
    while ((n = pdfSource.read(chunk)) != -1) {
        pdfTarget.write(chunk, 0, n);
    }
} // End of if
Remember to use try/catch around this, because most of these methods throw IOException, timeout exceptions, etc., and to finally close both streams. Also remember to do something meaningful (like giving an error output) in case the response code was not 200.
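Putting these pieces together, a minimal doGet() might look like the following sketch (URL and filename hard-coded for illustration, error handling kept short):

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    URL url = new URL("http://www01/manuals/zseries.pdf");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.connect();
    if (connection.getResponseCode() == HttpURLConnection.HTTP_OK) {
        response.setContentType("application/pdf");
        response.setContentLength(connection.getContentLength());
        response.setHeader("Content-Disposition", "attachment; filename=\"zseries.pdf\"");
        InputStream in = null;
        try {
            in = connection.getInputStream();
            OutputStream out = response.getOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                out.write(chunk, 0, n);
            }
        } finally {
            if (in != null) in.close();
        }
    } else {
        // do something meaningful when the upstream server did not answer 200
        response.sendError(HttpServletResponse.SC_BAD_GATEWAY, "Could not fetch PDF");
    }
}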
You could transfer the byte array to the client, then use iText to "stamp" the PDF into a new file. After that, use java.awt.Desktop to launch the file.
public static void launchPdf(byte[] bytes, String fileName) throws DocumentException, IOException {
    PdfReader reader = new PdfReader(bytes);
    PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(fileName));
    stamper.close();
    Desktop dt = Desktop.getDesktop();
    dt.browse(getFileURI(fileName)); // getFileURI: helper assumed to turn the file name into a URI
}
You don't need to push anything (and I hope you really don't, because actually you can't). From the perspective of the browser making the request, you could get the PDF from a database, generate it on the fly, or read it from the filesystem (which is your case). So, let's say you have this in your HTML:
<a href="/dl/zseries.pdf">DOWNLOAD FILE</a>
you need to register a servlet for /dl/* and implement the doGet(req, resp) like this:
public void doGet(
    HttpServletRequest req
    , HttpServletResponse resp
) throws IOException {
    resp.setContentType("application/pdf");
    resp.setHeader("Content-Disposition",
        "attachment; filename=\"" + suggestFilename(req) + "\"");
    // Then copy the stream, for example using IOUtils.copy ...
    // lookup the URL from the bits after /dl/*
    URL url = getURLFromRequest(req);
    InputStream in = url.openConnection().getInputStream();
    IOUtils.copy(in, resp.getOutputStream());
    in.close();
}
IOUtils is from Apache Commons IO (or just write your own read/write loop).
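If you prefer not to add the dependency, the hand-rolled loop is short; a sketch using the same in and resp as above:

// hand-rolled equivalent of IOUtils.copy(in, out)
OutputStream out = resp.getOutputStream();
byte[] buffer = new byte[4096];
int n;
while ((n = in.read(buffer)) != -1) {
    out.write(buffer, 0, n);
}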
General Use-Case
Imagine a client that is uploading large amounts of JSON. The Content-Type should remain application/json because that describes the actual data. Accept-Encoding and Transfer-Encoding seem to be for telling the server how it should format the response. Responses use the Content-Encoding header explicitly for this purpose, but it appears not to be a valid request header.
Is there something I am missing? Has anyone found an elegant solution?
Specific Use-Case
My use-case is that I have a mobile app that is generating large amounts of JSON (and some binary data in a few cases, but to a lesser extent), and compressing the requests saves a large amount of bandwidth. I am using Tomcat as my Servlet container. I am using Spring for its MVC annotations, primarily just to abstract away some of the JEE stuff into a much cleaner, annotation-based interface. I also use Jackson for automatic (de)serialization.
I also use nginx, but I am not sure if that's where I want the decompression to take place. The nginx nodes simply balance the requests, which are then distributed through the data center. It would be just as nice to keep the payload compressed until it actually got to the node that was going to process it.
Thanks in advance,
John
EDIT:
The discussion between me and @DaSourcerer was really helpful; it may interest those curious about the state of things at the time of writing.
I ended up implementing a solution of my own. Note that the link below points at the "ohmage-3.0" branch, but the code will soon be merged into the master branch. You might want to check there to see if I have made any updates/fixes.
https://github.com/ohmage/server/blob/ohmage-3.0/src/org/ohmage/servlet/filter/DecompressionFilter.java
It appears [Content-Encoding] is not a valid request header.
That is actually not quite true. As per RFC 2616, sec. 14.11, Content-Encoding is an entity header, which means it can be applied to the entities of both HTTP requests and responses. Through the powers of multipart MIME messages, even selected parts of a request (or response) can be compressed.
However, web server support for compressed request bodies is rather slim. Apache supports it to a degree via its mod_deflate module. It's not entirely clear to me whether nginx can handle compressed requests.
Since the original code is not available any more, I am posting mine here in case someone needs it.
I use the "Content-Encoding: gzip" request header to decide whether the filter needs to decompress the body or not.
Here's the code:
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException
{
    HttpServletRequest httpServletRequest = (HttpServletRequest) request;
    String contentEncoding = httpServletRequest.getHeader("Content-Encoding");
    if (contentEncoding != null && contentEncoding.indexOf("gzip") > -1)
    {
        try
        {
            final InputStream decompressStream = StreamHelper.decompressStream(httpServletRequest.getInputStream());
            httpServletRequest = new HttpServletRequestWrapper(httpServletRequest)
            {
                @Override
                public ServletInputStream getInputStream() throws IOException
                {
                    return new DecompressServletInputStream(decompressStream);
                }

                @Override
                public BufferedReader getReader() throws IOException
                {
                    return new BufferedReader(new InputStreamReader(decompressStream));
                }
            };
        }
        catch (IOException e)
        {
            mLogger.error("error while handling the request", e);
        }
    }
    chain.doFilter(httpServletRequest, response);
}
A simple ServletInputStream wrapper class:
public static class DecompressServletInputStream extends ServletInputStream
{
    private InputStream inputStream;

    public DecompressServletInputStream(InputStream input)
    {
        inputStream = input;
    }

    @Override
    public int read() throws IOException
    {
        return inputStream.read();
    }
}
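Side note (not part of the original answer): on a Servlet 3.1+ container, ServletInputStream has three further abstract methods, so the wrapper above would also need something like this (a blocking-IO sketch; isFinished() is only an approximation based on available(), and ReadListener is javax.servlet.ReadListener):

@Override
public boolean isFinished()
{
    try { return inputStream.available() == 0; } catch (IOException e) { return true; }
}

@Override
public boolean isReady()
{
    return true; // blocking IO: a read() may block, but it is always allowed
}

@Override
public void setReadListener(ReadListener readListener)
{
    throw new UnsupportedOperationException("non-blocking IO not supported by this wrapper");
}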
The decompression stream code:
public class StreamHelper
{
    /**
     * Gzip magic number, fixed values in the beginning to identify the gzip
     * format <br>
     * http://www.gzip.org/zlib/rfc-gzip.html#file-format
     */
    private static final byte GZIP_ID1 = 0x1f;

    /**
     * Gzip magic number, fixed values in the beginning to identify the gzip
     * format <br>
     * http://www.gzip.org/zlib/rfc-gzip.html#file-format
     */
    private static final byte GZIP_ID2 = (byte) 0x8b;

    /**
     * Returns a decompressing input stream if needed.
     *
     * @param input
     *            original stream
     * @return decompression stream
     * @throws IOException
     *             exception while reading the input
     */
    public static InputStream decompressStream(InputStream input) throws IOException
    {
        PushbackInputStream pushbackInput = new PushbackInputStream(input, 2);
        byte[] signature = new byte[2];
        pushbackInput.read(signature);
        pushbackInput.unread(signature);
        if (signature[0] == GZIP_ID1 && signature[1] == GZIP_ID2)
        {
            return new GZIPInputStream(pushbackInput);
        }
        return pushbackInput;
    }
}
When you are sending JSON, add this to your request headers:
"Accept-Encoding": "gzip, deflate"
Client code :
HttpUriRequest request = new HttpGet(url);
request.addHeader("Accept-Encoding", "gzip");
@JulianReschke pointed out that there can be a case of:
"Content-Encoding" : "gzip, gzip"
so extended server code will be:
InputStream in = response.getEntity().getContent();
Header encodingHeader = response.getFirstHeader("Content-Encoding");
String gzip = "gzip";
if (encodingHeader != null) {
    String encoding = encodingHeader.getValue().toLowerCase();
    int firstGzip = encoding.indexOf(gzip);
    if (firstGzip > -1) {
        in = new GZIPInputStream(in);
        int secondGzip = encoding.indexOf(gzip, firstGzip + gzip.length());
        if (secondGzip > -1) {
            in = new GZIPInputStream(in);
        }
    }
}
I suppose that nginx is used as a load balancer or proxy, so you need to set up Tomcat to do the compression.
Add the following attributes to the Connector in Tomcat's server.xml:
<Connector
compression="on"
compressionMinSize="2048"
compressableMimeType="text/html,application/json"
... />
Accepting gzipped requests in Tomcat is a different story. You'll have to put a filter in front of your servlets to enable request decompression. You can find more about that here.
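For reference, wiring such a decompression filter into web.xml could look like this (the filter class name is assumed):

<filter>
    <filter-name>decompressionFilter</filter-name>
    <filter-class>com.example.DecompressionFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>decompressionFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>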
I have an IP camera that becomes too slow whenever multiple users connect to it.
I was thinking about getting the stream from the camera onto my server, so that multiple clients could stream from the server instead of from the poor IP camera.
I set up a quick and dirty servlet just to see if it works:
@RequestMapping(value = "/", method = RequestMethod.GET, produces = "application/x-shockwave-flash")
public String getVideoStream(Locale locale, Model model, HttpServletRequest request, HttpServletResponse response) throws IOException {
    logger.info("Start");
    // An IP camera stream example
    URL url = new URL("http://www.earthcam.com/swf/ads5.swf");
    URLConnection yc = url.openConnection();
    OutputStream out = response.getOutputStream();
    InputStream in = yc.getInputStream();
    String mimeType = "application/x-shockwave-flash";
    byte[] bytes = new byte[100000];
    int bytesRead;
    response.setContentType(mimeType);
    while ((bytesRead = in.read(bytes)) != -1) {
        out.write(bytes, 0, bytesRead);
    }
    logger.info("End");
I believe this might work. My problem right now is that:
bytesRead = in.read(bytes)
reads only 61894 bytes and that's it :( Why is that happening? Am I trying to get the stream the wrong way?
By the way, I tried to do this with Xuggler, but I got an error saying compressed SWF is not supported.
Thanks
Your code is working perfectly. I just fetched ads5.swf from your server and it is, indeed, 61894 bytes in length. The problem you're facing is that the SWF file is just the movie player. After being downloaded, the player then fetches the video stream from the server. By default (if this is some kind of turn-key streaming solution), it's probably trying to get the stream from the same server where the SWF comes from.
I have a Java client and a few Tomcat servers (web servers). I have a sequence of operations I have to perform against the same server.
What I have in mind is using the same TCP session with a chain of:
read, write, read, write... - on the server side
write, read, write, read... - on the client side
The problem: after a read and write on the Tomcat server, the next read gets a -1 or an EOFException.
Client code:
java.net.URL u = new URL("http", "127.0.0.1", 8080, "/Dyno/BasicServlet");
HttpURLConnection huc = (HttpURLConnection) u.openConnection();
huc.setRequestMethod("POST");
huc.setDoOutput(true);
huc.connect();
os = huc.getOutputStream();
byte[] b = info();
os.write(b);
os.flush();
is = huc.getInputStream();
byte[] b2 = new byte[10];
is.read(b2);
b = info(b2);
os.write(b);
Server code:
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    ServletInputStream is = request.getInputStream();
    ServletOutputStream os = response.getOutputStream();
    byte[] clientMsg = new byte[10];
    is.read(clientMsg);
    byte[] serverMsg = respond(clientMsg);
    os.write(serverMsg);
    os.flush();
    is.read(); // Here I get -1
My theory is that Tomcat is closing the stream.
Do you agree?
Is there any way to bypass this?
Thank you.
HTTP is request-response only.
But WebSockets allow for full duplex communications between client and server.
Apache Tomcat 7 has preliminary support for WebSockets.
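For illustration (not from the original answer), a minimal full-duplex echo endpoint with the standard Java WebSocket API (JSR-356), which later Tomcat versions implement:

import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnMessage
    public String onMessage(String message, Session session) {
        // Unlike HTTP, the server could also push at any time via
        // session.getBasicRemote().sendText(...)
        return message; // the returned value is sent back to the client
    }
}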
I have a Servlet which returns a CSV file that is 'working' over HTTP in both Internet Explorer and Firefox. When I execute the same Servlet over HTTPS, only Firefox continues to download the CSV file. I don't think this is necessarily the Internet Explorer 6 or 7 issue described on MSDN:
The message is:
Internet Explorer cannot download data.csv from mydomain.com. Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later.
Please note that the site is still 'up' after this message and you can continue to browse the site; it's just the download of the CSV that prompts this message. I have been able to access similar files over HTTPS in IE from other J2EE applications, so I believe it is our code. Should we not be closing the bufferedOutputStream?
UPDATE
Whether or not to close the output stream:
I asked this question on the Java Posse forums, and the discussion there is also insightful. In the end it seems that no container should rely on the 'client' (your servlet code, in this case) to close this output stream. So if your failure to close the stream in your servlet causes a problem, it is more a reflection of a poor servlet container implementation than of your code. I cited the behavior of the IDEs and tutorials from Sun, Oracle and BEA, and how they are also inconsistent in whether they close the stream or not.
About the IE-specific behavior: in our case a separate product, 'Oracle Web Cache', was introducing additional header values, which impacts Internet Explorer only because of the way IE implements the 'no cache' requirement (see the MSDN article).
The code is:
public class DownloadServlet extends HttpServlet {
    public void doGet(HttpServletRequest request,
                      HttpServletResponse response) throws ServletException,
            IOException {
        ServletOutputStream out = null;
        ByteArrayInputStream byteArrayInputStream = null;
        BufferedOutputStream bufferedOutputStream = null;
        try {
            response.setContentType("text/csv");
            String disposition = "attachment; fileName=data.csv";
            response.setHeader("Content-Disposition", disposition);
            out = response.getOutputStream();
            byte[] blobData = dao.getCSV();
            //setup the input as the blob to write out to the client
            byteArrayInputStream = new ByteArrayInputStream(blobData);
            bufferedOutputStream = new BufferedOutputStream(out);
            int length = blobData.length;
            response.setContentLength(length);
            //byte[] buff = new byte[length];
            byte[] buff = new byte[(1024 * 1024) * 2];
            //now lets shove the data down
            int bytesRead;
            // Simple read/write loop.
            while (-1 !=
                    (bytesRead = byteArrayInputStream.read(buff, 0, buff.length))) {
                bufferedOutputStream.write(buff, 0, bytesRead);
            }
            out.flush();
            out.close();
        } catch (IOException e) {
            System.err.println(e);
            throw e;
        } finally {
            if (out != null)
                out.close();
            if (byteArrayInputStream != null) {
                byteArrayInputStream.close();
            }
            if (bufferedOutputStream != null) {
                bufferedOutputStream.close();
            }
        }
    }
}
I am really confused by your "from back through the breast into the head" write mechanism. Why not simply do this (the servlet output stream will be buffered, that's container stuff):
byte[] csv = dao.getCSV();
response.setContentType("text/csv");
response.setHeader("Content-Disposition", "attachment; filename=data.csv");
response.setContentLength(csv.length);
ServletOutputStream out = response.getOutputStream();
out.write(csv);
There should also be no need to flush the output stream, nor to close it.
The header content should not be parsed case-sensitively by IE, but who knows: do not camelcase fileName. The next question is the encoding. CSV is text, so you should use getWriter() instead of getOutputStream() and set the content type to "text/csv; charset=UTF-8", for example. But then the DAO should provide the CSV as a String instead of a byte[].
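That writer-based variant could look like this sketch (getCSVAsString() is a hypothetical DAO method returning a String):

String csv = dao.getCSVAsString(); // hypothetical: the DAO returns the CSV as a String
response.setContentType("text/csv; charset=UTF-8");
response.setHeader("Content-Disposition", "attachment; filename=data.csv");
PrintWriter out = response.getWriter(); // the writer encodes using the charset set above
out.write(csv);
// content length is left to the container: with UTF-8 the byte count may differ from csv.length()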
The servlet code has nothing to do with HTTPS, so the protocol does not matter from the server side. You can test the servlet from localhost with HTTP, I hope.
What about filters in your application? A filter may also set an HTTP header (or a footer) with cache-control, for example.