I am writing a custom proxy as a web service client for our main application, which uses REST web services. For security reasons, I am trying to use a servlet on the client side as a proxy to retrieve a PDF from the server side and then display that in the application web browser through the client app.
As the heart of this, I have this piece of code:
protected void copy(HttpResponse fromResponse, HttpServletResponse toResponse)
        throws IOException {
    HttpEntity entity = fromResponse.getEntity();
    for (Header header : fromResponse.getAllHeaders()) {
        toResponse.setHeader(header.getName(), header.getValue());
    }
    BufferedInputStream inputStream = new BufferedInputStream(entity.getContent());
    BufferedOutputStream outputStream = new BufferedOutputStream(toResponse.getOutputStream());
    int oneByte;
    int byteCount = 0;
    while ((oneByte = inputStream.read()) >= 0) {
        outputStream.write(oneByte);
        ++byteCount;
    }
    log.debug("Bytes copied: " + byteCount);
}
which should copy the PDF from the returned output stream to the current output stream and then return it.
When I run it, though, I get an error from Adobe Reader saying the file is damaged and could not be repaired. When I run the URL directly the file is fine, so it has to be something in the handoff. The byteCount is equal to the PDF file size.
Does anyone have an idea what the problem is?
By doing
while ((inputStream.read(buffer)) >= 0) {
    outputStream.write(buffer);
}
you will always write the full length of buffer, regardless of how many bytes were actually read, because write(byte[]) can only use the array's length to decide how much to write.
int count;
while ((count = inputStream.read(buffer)) >= 0) {
    outputStream.write(buffer, 0, count);
}
should take care of that problem.
I closed outputStream after writing to it and it works fine.
I didn't think you were supposed to do that?
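For reference: it is the flush that matters here. A BufferedOutputStream keeps the tail of the data in its internal buffer until it is flushed or closed, and close() flushes first, which is why closing "fixed" the damaged PDF. A small sketch of the copy loop with an explicit flush (all names here are illustrative, not from the original code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Copy everything from in to out; flush at the end but leave closing
    // the streams to the caller (e.g. the servlet container).
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int count;
        while ((count = in.read(buffer)) != -1) {
            out.write(buffer, 0, count); // write only the bytes actually read
            total += count;
        }
        out.flush(); // push any bytes still sitting in a BufferedOutputStream
        return total;
    }
}
```

Without the flush (or a close), the last partial buffer never reaches the client, which matches the symptom of a byte-truncated, "damaged" PDF.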
I have to do a file transfer (in my case a PDF) through a socket in Java for my homework. Usually I requested text and got text back, but this time I have to send a file through the socket. In my investigation I discovered that file transfers are done with FileInputStream/FileOutputStream. My problem is that the request to the server has to look something like this:
File file = new File(pathToFile);
PrintWriter out = new PrintWriter(s.getOutputStream());
OutputStream outFile = s.getOutputStream();
InputStream in = new FileInputStream(file);
int count;
out.write("user file\r\n"
        + file.getName() + "\r\n"
        + file.length() + "\r\n"
        + "body\r\n");
// send file, but I'm not sure how
byte[] buffer = new byte[(int) file.length()];
while ((count = in.read(buffer)) > 0) {
    outFile.write(buffer, 0, count);
}
out.flush();
outFile.flush();
Unfortunately this doesn't work for me: the server counts the request as two different outputs. Is there a way to combine both output streams, or to write the whole request into one single OutputStream?
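For what it's worth, one way to keep header and body in one request is to write both through the same underlying stream, flushing the character writer before the raw bytes so they arrive in order. A sketch, under the assumption that the header is ASCII text followed by raw file bytes (names here are illustrative):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class FileRequest {
    // Write the textual header and the raw file bytes through the SAME
    // underlying OutputStream so the server sees one continuous request.
    static void send(OutputStream out, String fileName, byte[] fileBytes) throws IOException {
        Writer header = new OutputStreamWriter(out, StandardCharsets.US_ASCII);
        header.write("user file\r\n" + fileName + "\r\n" + fileBytes.length + "\r\nbody\r\n");
        header.flush(); // push the header through before writing raw bytes
        out.write(fileBytes);
        out.flush();
    }
}
```

The key point is that the Writer is only a wrapper; flushing it hands its buffered characters to the shared OutputStream before the binary body follows.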
I have PDFs mounted on an external server. I have to access them in my Java servlet and push them to the client's browser. The PDF should get downloaded directly, or it may open a 'SAVE or OPEN' dialog window.
This is what I am trying in my code, but it does not do much:
URL url = new URL("http://www01/manuals/zseries.pdf");
ByteArrayOutputStream baos = new ByteArrayOutputStream();
InputStream in = url.openStream();
int FILE_CHUNK_SIZE = 1024 * 4;
byte[] chunk = new byte[FILE_CHUNK_SIZE];
int n = 0;
while ((n = in.read(chunk)) != -1) {
    baos.write(chunk, 0, n);
}
I have tried many ways to do this but could not succeed. Any good method to do this is welcome!
When you read the data, you get it inside your program memory, which is on the server side. To get it to the user's browser, you have to also write everything that you have read.
Before you start writing, though, you should give some appropriate headers.
Indicate that you are sending over a PDF file by setting the MIME type.
Set the content length.
Indicate that the file is intended for download rather than for display inside the browser.
To set the mime type, use
response.setContentType("application/pdf");
To set the content length, assuming it's the same content length that you get from the URL, use:
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.connect();
if (connection.getResponseCode() == 200) {
    int contentLength = connection.getContentLength();
    response.setContentLength(contentLength);
To indicate that you want the file to be downloaded, use:
    response.setHeader("Content-Disposition", "attachment; filename=\"zseries.pdf\"");
(Take care to change the file name to whatever you want the user to see in the save dialog box)
Finally, get the input stream from the URLConnection you just opened, get the servlet's response output stream, and start reading from one and writing to the other:
    InputStream pdfSource = connection.getInputStream();
    OutputStream pdfTarget = response.getOutputStream();
    int FILE_CHUNK_SIZE = 1024 * 4;
    byte[] chunk = new byte[FILE_CHUNK_SIZE];
    int n = 0;
    while ((n = pdfSource.read(chunk)) != -1) {
        pdfTarget.write(chunk, 0, n);
    }
} // end of if
Remember to use try/catch around this, because most of these methods throw IOException, timeout exceptions etc., and to finally close both streams. Also remember to do something meaningful (like give an error output) in case the response was not 200.
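Putting the read/write loop and the cleanup advice together, the streaming part can be factored into a small helper that always closes its source in a finally block. A sketch (the header calls above stay in the servlet; the class and method names here are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URLConnection;

public class UrlToClient {
    // Read the connection body and copy it to the client's output stream,
    // closing the source in a finally block as recommended above. Header
    // handling (content type, length, disposition) is omitted here.
    static long stream(URLConnection connection, OutputStream target) throws IOException {
        InputStream source = null;
        try {
            source = connection.getInputStream();
            byte[] chunk = new byte[4 * 1024];
            long total = 0;
            int n;
            while ((n = source.read(chunk)) != -1) {
                target.write(chunk, 0, n);
                total += n;
            }
            target.flush();
            return total;
        } finally {
            if (source != null) {
                source.close();
            }
        }
    }
}
```

The servlet's own output stream is left to the container; only the stream we opened ourselves is closed here.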
You could transfer the byte array to the client, then use iText to "stamp" the PDF into a new file. After that, use java.awt.Desktop to launch the file.
public static void launchPdf(byte[] bytes, String fileName) throws DocumentException, IOException {
    PdfReader reader = new PdfReader(bytes);
    PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(fileName));
    stamper.close();
    Desktop dt = Desktop.getDesktop();
    dt.browse(new File(fileName).toURI());
}
You don't need to push anything (hope you really don't, because actually you can't). From the perspective of the browser making the request, you could get the PDF from the database, generate it on the fly or read it from the filesystem (which is your case). So, let's say you have this in your HTML:
<a href="/dl/...">DOWNLOAD FILE</a>
you need to register a servlet for /dl/* and implement the doGet(req, resp) like this:
public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException {
    resp.setContentType("application/pdf");
    resp.setHeader("Content-Disposition",
            "attachment; filename=\"" + suggestFilename(req) + "\"");
    // then copy the stream, for example using IOUtils.copy:
    // look up the URL from the bits after /dl/*
    URL url = getURLFromRequest(req);
    InputStream in = url.openConnection().getInputStream();
    IOUtils.copy(in, resp.getOutputStream());
    in.close();
}
IOUtils is from Apache Commons IO (or just write your own while loop)
I have an IP camera that becomes too slow whenever multiple users connect to it.
I was thinking about getting the stream from the camera with my server, so that multiple clients could stream from the server instead of the poor IP camera.
I set up a quick and dirty servlet just to see if it works:
@RequestMapping(value = "/", method = RequestMethod.GET, produces = "application/x-shockwave-flash")
public String getVideoStream(Locale locale, Model model, HttpServletRequest request,
        HttpServletResponse response) throws IOException {
    logger.info("Start");
    // An IP camera stream example
    URL url = new URL("http://www.earthcam.com/swf/ads5.swf");
    URLConnection yc = url.openConnection();
    OutputStream out = response.getOutputStream();
    InputStream in = yc.getInputStream();
    String mimeType = "application/x-shockwave-flash";
    byte[] bytes = new byte[100000];
    int bytesRead;
    response.setContentType(mimeType);
    while ((bytesRead = in.read(bytes)) != -1) {
        out.write(bytes, 0, bytesRead);
    }
    logger.info("End");
    return null; // the response has been written directly
}
I believe this might work. My problem right now is that
bytesRead = in.read(bytes)
reads only 61894 bytes and that's it :( Why is that happening? Am I trying to get the stream the wrong way?
By the way: I tried to do this with Xuggler, but I got an error that compressed SWF is not supported.
Thanks
Your code is working perfectly. I just fetched ads5.swf from your server and it is, indeed, 61894 bytes in length. The problem you're facing is that the SWF file is just the movie player. After being downloaded, the player then fetches the video stream from the server. By default (if this is some kind of turn-key streaming solution), it's probably trying to get the stream from the same server where the SWF comes from.
I would like to implement a function where you send a URL of a photo and my server will automatically download and store it in a specified folder.
I studied some use cases, but as a beginner in this area of the web, I was a bit lost. I thought about FTP, but it's not exactly what I want.
Something like this function on my web service (using Java + Tomcat + Axis2):
void getPhoto(URL url) {
    // receive a photo and store it in the /photos folder
}
But I don't know what to use. I was looking at HTTP POST or GET; should I keep looking in this direction? Is there a dummy sample to show me the basic way?
I would like to implement a function where you send a URL of a photo and my server will automatically download and store it in a specified folder.
That's not exactly "uploading", but just "downloading".
Just call openStream() on the URL and you have an InputStream which you can do anything with, such as writing it to a FileOutputStream.
InputStream input = url.openStream();
// ...
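A minimal sketch of that idea, assuming the last path segment of the URL can serve as the file name (java.nio's Files.copy does the read/write loop for you; the class and method names are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class PhotoFetcher {
    // Download the resource behind url into targetDir, naming the file after
    // the last segment of the URL path.
    static Path getPhoto(URL url, Path targetDir) throws IOException {
        String path = url.getPath();
        String name = path.substring(path.lastIndexOf('/') + 1);
        Path target = targetDir.resolve(name);
        try (InputStream in = url.openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        return target;
    }
}
```

The try-with-resources block closes the stream even if the copy fails, so no manual close() calls are needed.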
Hey, use this code to download:
try {
    URL url = new URL(urlOfFile); // urlOfFile: the file's URL as a String
    URLConnection connection = url.openConnection();
    connection.connect();
    InputStream input = new BufferedInputStream(connection.getInputStream());
    String downloadloc = "D:\\"; // or anything
    OutputStream output = new FileOutputStream(downloadloc + "name-of-file.ext");
    byte[] data = new byte[1024];
    int count;
    long total = 0;
    while ((count = input.read(data)) != -1) {
        total += count;
        output.write(data, 0, count);
    }
    output.flush();
    output.close();
    input.close();
} catch (IOException e) {
    e.printStackTrace(); // don't swallow the exception silently
}
You want to look at using an HttpURLConnection: call its 'connect' and 'getInputStream' methods, continually read from that stream, and write the data to a file with e.g. a FileOutputStream.
To download a file using a URL, as an alternative to what suggested by others, you can take a look to Apache Commons HttpClient.
There is also a well written tutorial.
I have a Servlet which is returning a CSV file that is 'working' over HTTP in both Internet Explorer and Firefox. When I execute the same Servlet over HTTPS, only Firefox continues to download the CSV file. I don't think this is necessarily the Internet Explorer 6 or 7 issue described on MSDN:
The message is:
Internet Explorer cannot download data.csv from mydomain.com.
Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later.
Please note that the site is still 'up' after this message and you can continue to browse the site; it's just the download of the CSV that prompts this message. I have been able to access similar files over HTTPS in IE from other J2EE applications, so I believe it is our code. Should we not be closing the bufferedOutputStream?
UPDATE
Whether to close or not to close the output stream: I asked this question on the Java Posse forums, and the discussion there is also insightful. In the end it seems that no container should rely on the 'client' (your servlet code in this case) to close this output stream. So if your failure to close the stream in your servlet causes a problem, it is more a reflection of the poor implementation of your servlet container than of your code. I cited the behavior of the IDEs and tutorials from Sun, Oracle and BEA, and how they are also inconsistent in whether they close the stream or not.
About IE specific behavior: In our case a separate product 'Oracle Web Cache' was introducing the additional header values which impacts Internet explorer only because of the way IE implements the 'No Cache' requirement (see the MSDN article).
The code is:
public class DownloadServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        ServletOutputStream out = null;
        ByteArrayInputStream byteArrayInputStream = null;
        BufferedOutputStream bufferedOutputStream = null;
        try {
            response.setContentType("text/csv");
            String disposition = "attachment; fileName=data.csv";
            response.setHeader("Content-Disposition", disposition);
            out = response.getOutputStream();
            byte[] blobData = dao.getCSV();
            // set up the input as the blob to write out to the client
            byteArrayInputStream = new ByteArrayInputStream(blobData);
            bufferedOutputStream = new BufferedOutputStream(out);
            int length = blobData.length;
            response.setContentLength(length);
            // byte[] buff = new byte[length];
            byte[] buff = new byte[(1024 * 1024) * 2];
            // now let's shove the data down
            int bytesRead;
            // simple read/write loop
            while (-1 != (bytesRead = byteArrayInputStream.read(buff, 0, buff.length))) {
                bufferedOutputStream.write(buff, 0, bytesRead);
            }
            out.flush();
            out.close();
        } catch (Exception e) {
            System.err.println(e);
            throw new ServletException(e);
        } finally {
            if (out != null) {
                out.close();
            }
            if (byteArrayInputStream != null) {
                byteArrayInputStream.close();
            }
            if (bufferedOutputStream != null) {
                bufferedOutputStream.close();
            }
        }
    }
}
I am really confused by your roundabout write mechanism. Why not simply (the servlet output stream will be buffered; that's container stuff):
byte[] csv = dao.getCSV();
response.setContentType("text/csv");
response.setHeader("Content-Disposition", "attachment; filename=data.csv");
response.setContentLength(csv.length);
ServletOutputStream out = response.getOutputStream();
out.write(csv);
There should also be no need to flush the output stream, nor to close it.
The header content should not be parsed case-sensitively by IE, but who knows: do not camelcase fileName. The next question is the encoding. CSV is text, so you should use getWriter() instead of getOutputStream() and set the content type to "text/csv; charset=UTF-8", for example. But then the dao should provide the CSV as a String instead of byte[].
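One pitfall if you switch to getWriter() but keep setContentLength(): the length must be the encoded byte count, not the String length, since non-ASCII characters take more than one byte in UTF-8. A quick sketch (class name is illustrative):

```java
import java.nio.charset.StandardCharsets;

public class CsvLength {
    // Content-Length counts bytes after encoding: 'é' is 2 bytes in UTF-8,
    // so String.length() would understate the length.
    static int contentLength(String csv) {
        return csv.getBytes(StandardCharsets.UTF_8).length;
    }
}
```

In practice it is often simplest to omit setContentLength() entirely when writing text and let the container handle it.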
The servlet code has nothing to do with HTTPS, so the protocol should not matter from the server side. You can test the servlet from localhost over HTTP, I hope.
What about filters in your application? A filter may also set an HTTP header (or a footer) with cache-control, for example.