We have a system where a client makes an HTTP GET request, the system does some processing on the backend, zips the results, and sends them to the client. Since the processing can take some time, we stream the archive as a ZipOutputStream wrapping response.getOutputStream().
However, when the first ZipEntry holds an exceptionally small amount of data and the second entry takes a long time to produce, the client's browser times out. We've tried flushing the stream buffer, but no response seems to reach the browser until at least 1000 bytes have been written to the stream. Oddly, once those first 1000 bytes have been sent, subsequent flushes work fine.
I tried stripping down the code to bare-bones to give an example:
protected void doGet(HttpServletRequest request,
        HttpServletResponse response) throws ServletException, IOException {
    try {
        ZipOutputStream _zos = new ZipOutputStream(response.getOutputStream());
        ZipEntry _ze = null;
        long startTime = System.currentTimeMillis();
        long _lByteCount = 0;
        response.setContentType("application/zip");
        while (_lByteCount < 2000) {
            _ze = new ZipEntry("foo");
            _zos.putNextEntry(_ze);
            // writes 100 bytes and then waits 10 seconds
            _lByteCount += StreamWriter.write(
                    new ByteArrayInputStream(DataGenerator.getOutput().toByteArray()),
                    _zos);
            System.out.println("Zip: " + _lByteCount + " Time: "
                    + ((System.currentTimeMillis() - startTime) / 1000));
            // trying to flush
            _zos.finish();
            _zos.flush();
            response.flushBuffer();
            response.getOutputStream().flush();
        }
    } catch (Throwable e) {
        e.printStackTrace();
    }
}
I set my browser timeout to about 20 seconds for easy reproduction. Despite the 100 bytes being written several times, nothing is sent to the browser and it times out. If I extend the browser timeout, nothing is sent until 1000 bytes have been written, at which point the browser pops up the "Would you like to save..." dialog. Again, after the initial 1000 bytes, each additional 100 bytes is sent fine rather than being buffered into 1000-byte chunks.
If I set the max byte count in the while condition to 200 or so, it works fine, sending only 200 bytes.
What can I do to force the servlet to send back really small initial amounts of data?
It turns out there is a limit in the underlying Apache/Windows IP stack, which buffers data from a stream in an attempt to be efficient. Since most people have the problem of too much data rather than too little, this behavior is right most of the time. What we ended up doing was requiring the user to request enough data that we'd hit the 1000-byte limit before timing out. Sorry for taking so long to answer the question.
I know this is a really, really old question, but for the record, I wanted to post an answer that should be a fix all for the issue that you are experiencing.
The key is that you want to flush the response stream, not the zip stream, because the ZIP stream cannot flush what is not yet ready to be written. Your client, as you mentioned, times out because it does not receive a response within a predetermined amount of time; but once it receives data, it is patient and will wait a very long time for the download. The fix is therefore easy, provided you flush the correct stream. I recommend the following:
protected void doGet(HttpServletRequest request,
        HttpServletResponse response) throws ServletException, IOException {
    try {
        ZipOutputStream _zos = new ZipOutputStream(response.getOutputStream());
        ZipEntry _ze = null;
        long startTime = System.currentTimeMillis();
        long _lByteCount = 0;
        response.setContentType("application/zip");
        // force an immediate response of the expected content
        // so the client can begin the download process
        response.flushBuffer();
        while (_lByteCount < 2000) {
            _ze = new ZipEntry("foo");
            _zos.putNextEntry(_ze);
            // writes 100 bytes and then waits 10 seconds
            _lByteCount += StreamWriter.write(
                    new ByteArrayInputStream(DataGenerator.getOutput().toByteArray()),
                    _zos);
            System.out.println("Zip: " + _lByteCount + " Time: "
                    + ((System.currentTimeMillis() - startTime) / 1000));
            // flush what the zip stream has ready; finish() writes the ZIP
            // central directory, so it must wait until all entries are done
            _zos.flush();
        }
        _zos.finish();
    } catch (Throwable e) {
        e.printStackTrace();
    }
}
Now, what should happen is that the headers and response code are committed along with anything in the response buffer's OutputStream. This does not close the stream, so any additional writes are appended to it. The downside of doing it this way is that you cannot know the content length to put in the header. The upside is that you start the download immediately and don't let the browser time out.
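One caveat worth making explicit (a minimal sketch; the filename is illustrative): any headers you want the client to see must be set before the buffer is flushed, because flushBuffer() commits the response and header changes made after the commit are ignored.
// Set all headers first; after flushBuffer() the response is committed
// and further header changes are silently ignored.
response.setContentType("application/zip");
response.setHeader("Content-Disposition", "attachment; filename=results.zip");
response.flushBuffer(); // sends the status line and headers immediately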
My guess is that the zip output stream doesn't actually write anything before being able to compress the data. The Huffman coding used for zipping requires all the data to be known before anything can actually be compressed; basically, it can't start before everything is known.
Zipping might be a win if the amount of data is big, but I don't think you can achieve an asynchronous response while zipping data.
I can't reproduce your problem at all. Below is your code, slightly altered, running in an embedded Jetty server. I ran it in IntelliJ and requested http://localhost:8080 from Firefox. As expected, the "Save or Open" dialog popped up after 1 second. Selecting "Save" and waiting 20 seconds results in a zip file which can be opened and contains 20 separate entries, named foo<number>, each containing a single line 100 characters wide and ending with <number>. This is on Windows 7 Premium 64-bit with JDK 1.6.0_26. Chrome acts the same way. IE, on the other hand, seems to normally wait for 5 seconds (500 bytes), though once it showed the dialog immediately, and another time it seemed to wait 9 or 10 seconds. Try it in different browsers:
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

import javax.servlet.ServletException;
import javax.servlet.http.*;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZippingAndStreamingServlet {

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        server.setHandler(context);
        context.addServlet(new ServletHolder(new BufferingServlet()), "/*");
        server.start();
        System.out.println("Listening on 8080");
        server.join();
    }

    static class BufferingServlet extends HttpServlet {

        protected void doGet(HttpServletRequest request,
                HttpServletResponse response) throws ServletException, IOException {
            ZipOutputStream _zos = new ZipOutputStream(response.getOutputStream());
            ZipEntry _ze;
            long startTime = System.currentTimeMillis();
            long _lByteCount = 0;
            int count = 1;
            response.setContentType("application/zip");
            response.setHeader("Content-Disposition", "attachment; filename=my.zip");
            while (_lByteCount < 2000) {
                _ze = new ZipEntry("foo" + count);
                _zos.putNextEntry(_ze);
                byte[] bytes = String.format("%100d", count++).getBytes();
                System.out.println("Sending " + bytes.length + " bytes");
                _zos.write(bytes);
                _lByteCount += bytes.length;
                sleep(1000);
                System.out.println("Zip: " + _lByteCount + " Time: "
                        + ((System.currentTimeMillis() - startTime) / 1000));
                _zos.flush();
            }
            _zos.close();
        }

        private void sleep(int millis) {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                throw new IllegalStateException("Unexpected interrupt!", e);
            }
        }
    }
}
You could be getting screwed by the Java API.
Looking through the JavaDocs of the various OutputStream family of classes (OutputStream, ServletOutputStream, FilterOutputStream, and ZipOutputStream), they either mention that they rely on the underlying stream for flush() or declare that flush() doesn't do anything (OutputStream).
ZipOutputStream inherits flush() and write() from FilterOutputStream.
From the FilterOutputStream JavaDoc:
The flush method of FilterOutputStream calls the flush method of its
underlying output stream.
In the case of ZipOutputStream, it is wrapped around the stream returned from ServletResponse.getOutputStream(), which is a ServletOutputStream. It turns out that ServletOutputStream doesn't implement flush() either; it inherits it from OutputStream, which specifically says in its JavaDoc:
public void flush() throws IOException

Flushes this output stream and forces any buffered output bytes to be written out. The general contract of flush is that calling it is an indication that, if any bytes previously written have been buffered by the implementation of the output stream, such bytes should immediately be written to their intended destination. If the intended destination of this stream is an abstraction provided by the underlying operating system, for example a file, then flushing the stream guarantees only that bytes previously written to the stream are passed to the operating system for writing; it does not guarantee that they are actually written to a physical device such as a disk drive.

**The flush method of OutputStream does nothing.**
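Traced concretely (a sketch, ignoring any bytes the deflater itself may still be holding), the two flush calls in the question therefore collapse into the same underlying call:
// Both calls bottom out in the same ServletOutputStream.flush(), whose
// actual behavior is up to the container's concrete implementation.
_zos.flush();                       // FilterOutputStream delegates downward...
response.getOutputStream().flush(); // ...to the very stream it wraps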
Maybe this is a special case, I don't know. I do know that flush() has been around a long time and it is unlikely that no one has noticed a hole in the functionality there.
It makes me wonder if there is an operating system component to the stream buffering that could be configured to remove the 1k buffer effect.
A related question has a similar issue, but it was working directly with a file instead of through Java's Stream abstraction, and this answer points to the MSDN articles involved regarding file buffering and file caching.
A similar scenario was listed in the bug database.
Summary
The Java IO library relies on the OS implementation for streams. If the OS has caching turned on, Java code may not be able to force a different behavior. In the case of Windows, you have to open the file with non-standard parameters to enable write-through-cache or no-buffering functionality. I doubt the Java SDK provides such OS-specific options, since it is trying to create platform-generic APIs.
The issue is that by default each servlet implementation buffers the data, whereas SSE and other custom requirements need the data immediately.
The solution is to do the following:
response.setBufferSize(1); // or some similarly small number for such servlets
This will ensure that the data is written out earlier (with the resulting performance loss).
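As a minimal sketch of such a servlet (names illustrative; note that setBufferSize must be called before anything is written, and the spec allows the container to round the requested size up):
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    resp.setBufferSize(1);           // must precede any body writes
    resp.setContentType("text/plain");
    PrintWriter out = resp.getWriter();
    for (int i = 0; i < 10; i++) {
        out.println("event " + i);
        out.flush();                 // with a tiny buffer, each line should go out promptly
    }
}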
Related
I have a servlet, mapped to an URL, which does a long task and outputs some data while it's working.
What I want to do is call this URL and see the output in real time.
Let's take this as an example:
package com.tasks;

import java.io.IOException;
import java.io.PrintStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LongTaskWithOutput extends HttpServlet {
    private static final long serialVersionUID = 2945022862538743411L;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.addHeader("Content-Type", "text/plain");
        PrintStream out = new PrintStream(response.getOutputStream(), true);
        for (int i = 0; i < 10; i++) {
            out.println("# " + i);
            out.flush();
            try {
                Thread.sleep(1000);
            } catch (Exception e) {}
        }
    }
}
With the following in web.xml:
...
<servlet>
    <servlet-name>LongTaskServlet</servlet-name>
    <servlet-class>com.tasks.LongTaskWithOutput</servlet-class>
    <description>Long Task Servlet</description>
</servlet>
<servlet-mapping>
    <servlet-name>LongTaskServlet</servlet-name>
    <url-pattern>/longTask</url-pattern>
</servlet-mapping>
...
What happens
If I browse to localhost/myApp/longTask, the browser makes me wait 10 seconds and then prints out all the text at once.
What should happen
The text should be sent to the browser as soon as it's written to the output stream, and the browser should render one line every second.
As you can see, I already put an out.flush() to be sure that the stream flushes every second, but it still doesn't work.
I also tried with response.flushBuffer(), but I had the same result.
Is there a way to achieve this?
Update
As @MadConan suggested, I tried to use the output stream directly:
OutputStream out = response.getOutputStream();
for (int i = 0; i < 10; i++) {
    out.write(("# " + i + "\n").getBytes());
    out.flush();
    try {
        Thread.sleep(1000);
    } catch (Exception e) {}
}
The result, unfortunately, is still the same.
This is an upstream issue. The browser is not necessarily going to display the data as it receives it; it may wait until the request is complete. You might also see additional chunking if you are going through a proxy. If you snoop on the network traffic, I bet it goes through as expected.
You are flushing the content to the response stream, but the response is not being committed to the client. Think of it this way: you hand a package to someone in a room to carry out and deliver; until that person actually leaves the room and hands it over, your package is not delivered.
For your requirement, you can keep pushing the data into some shared space and then have a PULL mechanism where the client reads that space every X seconds and displays the content. There is also the option of a PUSH mechanism; whether to use PUSH or PULL depends on your project.
This basically means that within one request you cannot repeatedly update the client, return to the server, and do the same over and over again. The request dies once the response is committed to the client. After that, there has to be another PULL request from the client or a PUSH from the server.
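As a hedged sketch of the PULL variant (ProgressStore is a hypothetical application-scoped map, not a real API; the long task writes into it and the client polls this servlet every few seconds):
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String taskId = request.getParameter("taskId");
    response.setContentType("text/plain");
    // ProgressStore.get(taskId) returns whatever the task last recorded, e.g. "# 4"
    response.getWriter().println(ProgressStore.get(taskId));
}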
I am writing some Java TCP/IP networking code (client-server) in which I have to deal with scenarios where the sends are much faster than the receives, thus blocking the send operations at one end (because the send and recv buffers fill up). In order to design my code, I wanted to first play around with these kinds of situations and see how the client and server behave under varying load. But I am not able to set the parameters appropriately to achieve this back pressure. I tried setting Socket.setSendBufferSize(int size) and Socket.setReceiveBufferSize(int size) to small values, hoping they would fill up soon, but I can see that the send operation completes without waiting for the client to consume enough of the data already written (which means the small send and recv buffer sizes have no effect).
Another approach I took was to use Netty and set ServerBootstrap.setOption("child.sendBufferSize", 256);, but even this is of not much use. Can anyone help me understand what I am doing wrong?
The buffers have an OS-dependent minimum size; this is often around 8 KB.
public static void main(String... args) throws IOException, InterruptedException {
    ServerSocketChannel ssc = ServerSocketChannel.open();
    ssc.bind(new InetSocketAddress(0)); // open on a random port
    InetSocketAddress remote = new InetSocketAddress("localhost", ssc.socket().getLocalPort());
    SocketChannel sc = SocketChannel.open(remote);
    configure(sc);
    SocketChannel accept = ssc.accept();
    configure(accept);
    ByteBuffer bb = ByteBuffer.allocateDirect(16 * 1024 * 1024);
    // write as much as you can
    while (sc.write(bb) > 0)
        Thread.sleep(1);
    System.out.println("The socket write wrote " + bb.position() + " bytes.");
}

private static void configure(SocketChannel socketChannel) throws IOException {
    socketChannel.configureBlocking(false);
    socketChannel.socket().setSendBufferSize(8);
    socketChannel.socket().setReceiveBufferSize(8);
}
on my machine prints
The socket write wrote 32768 bytes.
This is the sum of the send and receive buffers, but I suspect they are both 16 KB
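You can observe this yourself: the buffer-size setters are only hints to the OS, and the getters report what was actually granted. A tiny self-contained check:
import java.net.Socket;

public class BufferSizeHint {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket();
        s.setSendBufferSize(8); // a hint, not a command
        // Typically prints an OS-imposed minimum of several KB, not 8.
        System.out.println("Granted send buffer: " + s.getSendBufferSize());
        s.close();
    }
}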
I think Channel.setReadable is what you need. setReadable tells Netty to temporarily pause reading data from the system socket into its buffer; once that buffer is full, the other end will have to wait.
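A hedged sketch of that idea against the Netty 3-era API this refers to (channel is an org.jboss.netty.channel.Channel; processBacklog is a hypothetical application method):
// Stop draining the socket while we are busy; once our receive buffer
// fills, TCP flow control makes the sender's writes block or return 0.
channel.setReadable(false);
processBacklog();           // do the slow work
channel.setReadable(true);  // resume reading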
I am writing a client-server application which sends an .xml file from the client to the server. I have a problem sending large data. I noticed that the server receives at most 1460 bytes: when I send a file larger than 1460 bytes, the server gets only the first 1460 bytes and nothing more. As a result I get an incomplete file. Here is my code:
client send:
public void sendToServer(File file) throws Exception {
    OutputStream output = sk.getOutputStream();
    FileInputStream fileInputStream = new FileInputStream(file);
    byte[] buffer = new byte[1024 * 1024];
    int bytesRead = 0;
    while ((bytesRead = fileInputStream.read(buffer)) > 0) {
        output.write(buffer, 0, bytesRead);
    }
    fileInputStream.close();
}
server get:
public File getFile(String name) throws Exception {
    File file = null;
    InputStream input = sk.getInputStream();
    file = new File("C://protokolPliki/" + name);
    FileOutputStream out = new FileOutputStream(file);
    byte[] buffer = new byte[1024 * 1024];
    int bytesReceived = 0;
    while ((bytesReceived = input.read(buffer)) > 0) {
        out.write(buffer, 0, bytesReceived);
        System.out.println(bytesReceived);
        break;
    }
    return file;
}
Does anyone know what is wrong with this code? Thanks for any help.
EDIT:
Nothing helps :(. I googled about it and I think it may be connected with the TCP MSS, which is equal to 1460 bytes.
Make sure you call flush() on the streams.
A passerby asks: isn't close() enough?
You linked to the docs for Writer, and the info on the close() method states:
Closes the stream, flushing it first. ..
So you are partly right. OTOH, the OP is clearly using an OutputStream, and the docs for close() state:
Closes this output stream and releases any system resources associated with this stream. The general contract of close is that it closes the output stream. A closed stream cannot perform output operations and cannot be reopened.
**The close method of OutputStream does nothing.**
(Emphasis mine.)
So to sum up: no, calling close() on a plain OutputStream will have no effect, and might as well be removed by the compiler.
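Applied to the sendToServer method above (sk is the question's socket; shutdownOutput is optional but makes the end of the file explicit to the server):
output.flush();      // push any buffered bytes onto the wire
sk.shutdownOutput(); // half-close: tells the server no more data is coming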
Although not related to your question, the API documentation says FileInputStream.read returns -1 for end of file. You should use >= 0 in the while-loop condition.
The MTU (Maximum Transmission Unit) for Ethernet is around 1500 bytes. Consider sending the file in chunks (i.e. one line at a time or 1024 bytes at a time).
See if using 1024 instead of 1024 * 1024 for the byte buffer solves your problem.
In the code executed on the server side, there is a break statement in the while loop, so the code in the loop executes only once. Remove the break and the code should work just fine.
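With the break removed, the receive loop becomes (flush and close added for good measure):
byte[] buffer = new byte[1024 * 1024];
int bytesReceived;
while ((bytesReceived = input.read(buffer)) > 0) {
    out.write(buffer, 0, bytesReceived);
    System.out.println(bytesReceived);
}
out.flush();
out.close();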
I'm working on an instant messenger using Java 1.6. The IM uses multithreading: a main thread, a receiving thread, and a ping thread. For TCP/IP communication I use SocketChannel. It seems there is a problem with receiving bigger packages from the server: instead of one package the server sends a couple, and that's where the problem begins. The first 8 bytes of every package tell the package's type and size. This is how I manage reading:
public void run() {
    while (true) {
        try {
            Headbuffer.clear();
            bytes = readChannel.read(Headbuffer); // ReadableByteChannel
            Headbuffer.flip();
            if (bytes != -1) {
                int head = Headbuffer.getInt();
                int size = Headbuffer.getInt();
                System.out.println("received pkg: 0x" + Integer.toHexString(head)
                        + " with size " + size + " bytes");
                switch (head) {
                    case incoming.Pkg1: ReadWelcome(); break;
                    case incoming.Pkg2: ReadLoginFail(); break;
                    case incoming.Pkg3: ReadLoginOk(); break;
                    case incoming.Pkg4: ReadUserList(); break;
                    case incoming.Pkg5: ReadUserData(); break;
                    case incoming.Pkg6: ReadMessage(); break;
                    case incoming.Pkg7: ReadTypingNotify(); break;
                    case incoming.Pkg8: ReadListStatus(); break;
                    case incoming.Pkg9: ChangeStatus(); break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
And during the tests everything was fine until I logged in to my account and imported my buddy list. I sent a request to the server for statuses, and it sent me back only about 10 out of 80 contacts. So I came up with something like this:
public synchronized void readInStatus(ByteBuffer headBuffer) {
    byteArray.add(headBuffer); // store every buffer in an ArrayList
    int buddies = MainController.controler.getContacts().getSize();
    while (buddies > 0) {
        readStuff();
        readDescription();
        --buddies;
    }
}
and each of readStuff() and readDescription() checks each parameter's size against the remaining bytes in the buffer:
if (byteArray.get(current).remaining() >= 4) {
    uin = byteArray.get(current).getInt();
} else {
    byteArray.add(Receiver.receiver.read());
    current = current + 1;
    uin = byteArray.get(current).getInt();
}
and Receiver.receiver.read() is:
public ByteBuffer read() {
    try {
        ByteBuffer bb = ByteBuffer.allocate(40000);
        bb.order(ByteOrder.LITTLE_ENDIAN);
        bytes = readChannel.read(bb);
        bb.flip();
        return bb;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
So the application is launched and logs in, then sends the contacts. The server sends back just a piece of my list, and in readInStatus(ByteBuffer headBuffer) I try to force reading the rest of it. And now the fun part: after some time it gets to Receiver.receiver.read(), and on bytes = readChannel.read(bb) it just stops, and I don't know why; no errors, nothing, even after waiting. I've been fighting with this for a whole week and I'm nowhere near a solution. I will appreciate any suggestions. Thanks.
Thanks for the response. Yes, I'm using a blocking SocketChannel. I tried non-blocking, but it went wild and out of control, so I skipped the idea. About the bytes I expect: this is kind of weird, because the size is given only once, in the head, and it is the size of the first part, not the whole package; the other parts contain no header bytes at all. I can't predict how many bytes there will be; the reason is the descriptions, each with a 255-byte capacity. This is exactly why I created the buddies variable in readInStatus(ByteBuffer headBuffer),
which is basically the length of my buddy list, and before reading each field I check whether there are enough bytes left; if not, I do a read(). The last field before a description is an integer with the length of the incoming description. But it's impossible to determine how long the package is until some processing is done. @robert, do you think I should try switching to a non-blocking SocketChannel again in this situation?
The problem is most likely that you are sending fewer bytes than you are trying to read. You might have missed writing something, written things in the wrong order, misread a size field, or something like that.
I think I'd attack this problem by adding tracing code to count and log the number of bytes read and written, notional packet sizes, and so on. Then run, and compare the traces to see where things start to get out of sync.
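A hedged sketch of such tracing: a counting wrapper (illustrative, not taken from the question's code) that either side can slip around its output stream, so the two totals can be compared after a run:
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Counts every byte that passes through the stream it wraps.
class CountingOutputStream extends FilterOutputStream {
    private long count = 0;

    CountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        count += len;
    }

    public long getCount() {
        return count;
    }
}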
If you are using a blocking SocketChannel, then read will block until the buffer is filled or the server delivers end of stream. For a server with connection keep-alive, the server does not send end of stream - it will simply stop sending data, and the read will hang indefinitely or until timeout.
You could either:
(i) try using a non-blocking SocketChannel, repeatedly reading until the read delivers 0 bytes (but beware: 0 bytes does not necessarily mean end of stream; it could mean an interruption), or
(ii) if you have to use the blocking version, and you know how many bytes you were expecting from the server e.g. from a header, when the number of bytes left to read is less than buffer.capacity(), move position and/or limit on the buffer so as to leave only the required space in the buffer before the read. I am working this solution now. If it works for you, please let me know!
So far as I can work out, if you have to use a blocking SocketChannel and you do not know how many bytes you are expecting, and the server does not send end of stream, there is no solution.
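For the case where you do know the byte count, a sketch of option (ii) (expected comes from the 8-byte package header; readSoFar is hypothetical bookkeeping):
// Cap the buffer so a blocking read can never wait for, or consume,
// bytes belonging to the next package.
int remaining = expected - readSoFar;
buffer.clear();
if (remaining < buffer.capacity()) {
    buffer.limit(remaining); // read() now fills at most 'remaining' bytes
}
int n = readChannel.read(buffer);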
I am using IBM Websphere Application Server v6 and Java 1.4 and am trying to write large CSV files to the ServletOutputStream for a user to download. Files are ranging from a 50-750MB at the moment.
The smaller files aren't causing too much of a problem, but with the larger files it appears that the data is being written into the heap, which then causes an OutOfMemory error and brings down the entire server.
These files can only be served out to authenticated users over HTTPS which is why I am serving them through a Servlet instead of just sticking them in Apache.
The code I am using is (some fluff removed around this):
resp.setHeader("Content-length", "" + fileLength);
resp.setContentType("application/vnd.ms-excel");
resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\"");
FileInputStream inputStream = null;
try
{
inputStream = new FileInputStream(path);
byte[] buffer = new byte[1024];
int bytesRead = 0;
do
{
bytesRead = inputStream.read(buffer, offset, buffer.length);
resp.getOutputStream().write(buffer, 0, bytesRead);
}
while (bytesRead == buffer.length);
resp.getOutputStream().flush();
}
finally
{
if(inputStream != null)
inputStream.close();
}
The FileInputStream doesn't seem to be causing a problem as if I write to another file or just remove the write completely the memory usage doesn't appear to be a problem.
What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream() causing my memory issues and crashing!
I have tried Buffering these streams and also tried using Channels from java.nio, none of which seems to make any bit of difference to my memory issues. I have also flushed the OutputStream once per iteration of the loop and after the loop, which didn't help.
The average decent servlet container itself flushes the stream by default every ~2KB. You should really not have the need to explicitly call flush() on the OutputStream of the HttpServletResponse at intervals when sequentially streaming data from one and the same source. In for example Tomcat (and WebSphere!) this is configurable as the bufferSize attribute of the HTTP connector.
The average decent servlet container also just streams the data in chunks if the content length is unknown beforehand (as per the Servlet API specification!) and if the client supports HTTP 1.1.
The problem symptoms at least indicate that the servlet container is buffering the entire stream in memory before flushing. This can mean that the content-length header is not set and/or the servlet container does not support chunked encoding and/or the client side does not support chunked encoding (i.e. it is using HTTP 1.0).
To fix one or the other, just set the content length beforehand:
response.setContentLengthLong(new File(path).length());
Or when you're not on Servlet 3.1 yet:
response.setHeader("Content-Length", String.valueOf(new File(path).length()));
Does flush() work on the output stream?
Really I wanted to comment that you should use the three-argument form of write, as the buffer is not necessarily fully read (particularly at the end of the file!). Also, a try/finally would be in order unless you want your server to die unexpectedly.
I have used a class that wraps the output stream to make it reusable in other contexts. It has worked well for me in getting data to the browser faster, but I haven't looked at the memory implications. (Please pardon my antiquated m_ variable naming.)
import java.io.IOException;
import java.io.OutputStream;

public class AutoFlushOutputStream extends OutputStream {

    protected long m_count = 0;
    protected long m_limit = 4096;
    protected OutputStream m_out;

    public AutoFlushOutputStream(OutputStream out) {
        m_out = out;
    }

    public AutoFlushOutputStream(OutputStream out, long limit) {
        m_out = out;
        m_limit = limit;
    }

    public void write(int b) throws IOException {
        if (m_out != null) {
            m_out.write(b);
            m_count++;
            if (m_limit > 0 && m_count >= m_limit) {
                m_out.flush();
                m_count = 0;
            }
        }
    }
}
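Usage is just a wrap, for example:
// Flush automatically after every ~4 KB written to the response.
OutputStream out = new AutoFlushOutputStream(response.getOutputStream(), 4096);
One caveat: since only write(int) is overridden, bulk writes are funneled through it one byte at a time by OutputStream's default write(byte[], int, int), which is correct but slow for large buffers.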
I'm also not sure whether flush() on ServletOutputStream works in this case, but ServletResponse.flushBuffer() should send the response to the client (at least per the 2.3 servlet spec).
ServletResponse.setBufferSize() sounds promising, too.
So, following your scenario, shouldn't you be flushing inside that while loop (on every iteration) rather than outside it? I would try that, with a somewhat larger buffer though.
Kevin's class should close the m_out field if it's not null in its close() method; we don't want to leak things, do we?
As well as the ServletOutputStream.flush() method, the HttpServletResponse.flushBuffer() operation may also flush the buffers. However, it appears to be an implementation-specific detail whether these operations have any effect, or whether HTTP content-length support is interfering. Remember, specifying content-length is optional in HTTP 1.0, so things should just stream out if you flush; but I don't see that happening.
The while condition does not work; you need to check for -1 before using the result. And please use a temporary variable for the output stream; it's nicer to read and it saves calling getOutputStream() repeatedly.
OutputStream outStream = resp.getOutputStream();
while (true) {
    int bytesRead = inputStream.read(buffer);
    if (bytesRead < 0)
        break;
    outStream.write(buffer, 0, bytesRead);
}
inputStream.close();
outStream.close();
Unrelated to your memory problems, the while loop should be:
while (bytesRead > 0);
Your code has an infinite loop:
do {
    bytesRead = inputStream.read(buffer, offset, buffer.length);
    resp.getOutputStream().write(buffer, 0, bytesRead);
} while (bytesRead == buffer.length);
offset has the same value throughout the loop, so if offset is initially 0 it will remain 0 in every iteration, which can cause an infinite loop and lead to the OOM error.
IBM WebSphere Application Server uses asynchronous data transfer for servlets by default. That means it buffers the response. If you have problems with large data and OutOfMemory exceptions, try changing the WAS settings to use synchronous mode.
Setting the WebSphere Application Server WebContainer to synchronous mode
You must also take care to load the data in chunks and flush them.
Sample for streaming a large file:
ServletOutputStream os = response.getOutputStream();
FileInputStream fis = new FileInputStream(file);
try {
    int buffSize = 1024;
    byte[] buffer = new byte[buffSize];
    int len;
    while ((len = fis.read(buffer)) != -1) {
        os.write(buffer, 0, len);
        os.flush();
        response.flushBuffer();
    }
} finally {
    os.close();
}