I am making an Android app that uses a WebView to access a webpage. To handle downloads I use an AsyncTask in the onDownloadStart method of the WebView's DownloadListener. However, the downloaded files are blank (although the filename and extension are correct). My Java code is this:
protected String doInBackground(String... urls) {
    try {
        // Parameter renamed to urls so it no longer shadows the local URL variable
        URL url = new URL(urls[0]);
        //Creating directory if not exists
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        // Note: setDoOutput(true) switches HttpURLConnection to POST semantics;
        // it is unnecessary for a download and can itself produce empty responses
        connection.setDoOutput(true);
        connection.connect();
        //Obtaining filename
        File outputFile = new File(directory, filename);
        InputStream input = new BufferedInputStream(connection.getInputStream());
        OutputStream output = new FileOutputStream(outputFile);
        byte[] data = new byte[1024];
        int count;
        // Beware: this debug read consumes up to 1024 bytes before the loop below
        Log.e("Download", "input.read(data) = " + input.read(data));
        while ((count = input.read(data)) != -1) {
            output.write(data, 0, count);
        }
        connection.disconnect();
        output.flush();
        output.close();
        input.close();
    } catch (MalformedURLException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
The Log.e line prints -1 for input.read(data).
The PHP code of the download page is below (it works on all other platforms). Files are stored in non-public directories of my web server.
<?php
$guid = $_GET['id'];
$file = get_file($guid);
if (isset($file['path'])) {
    $mime = $file['MIMEType'];
    if (!$mime) {
        $mime = "application/octet-stream";
    }
    header("Pragma: public");
    header("Content-type: $mime");
    header("Content-Disposition: attachment; filename=\"{$file['filename']}\"");
    header('Content-Transfer-Encoding: binary');
    ob_clean();
    flush();
    readfile($file['path']);
    exit();
}
?>
I've noticed that if I write some text after the closing "?>" of the PHP file, that text does end up in the downloaded file.
In your code you are using ob_clean(), which simply erases the output buffer. Your subsequent call to flush() therefore sends nothing, because the buffer's contents were erased beforehand.
Instead of ob_clean() and flush(), use ob_end_flush(). This will stop output buffering and it will send all the output it withheld.
ob_end_flush — Flush (send) the output buffer and turn off output buffering
If you want to stop output buffering without outputting whatever is saved, you can use ob_end_clean(). Anything after this command will be output again, but anything between ob_start() and ob_end_clean() will be "swallowed."
ob_end_clean — Clean (erase) the output buffer and turn off output buffering
What are the benefits of output buffering in the first place? If you call ob_start() and then flush() everything anyway, you might as well output everything directly.
I'm writing a program that builds stuff in a GUI (blah blah blah... irrelevant details), and the user is allowed to export that data as a .tex file which can be compiled to a PDF. Since I don't want to assume they have a TeX environment installed, I'm using an API (latexonline.cc). That way, I can construct an HTTP GET request, send it to the API, and (hopefully!) get the PDF back as a byte stream. The issue, though, is that when I submit the request, I only get page data back instead of the PDF data. I'm not sure whether it's because of how I'm doing my request or not...
Here's the code:
... // preceding code
DataOutputStream dos = new DataOutputStream(new FileOutputStream("test.pdf"));
StringBuilder httpTex = new StringBuilder();
httpTex.append(this.getTexCode(...)); // This appends the TeX code (nothing wrong here)
// Build the URL and HTTP request.
String texURL = "https://latexonline.cc/compile?text=";
String paramURL = URLEncoder.encode(httpTex.toString(), "UTF-8");
URL url = new URL(texURL + paramURL);
byte[] buffer = new byte[1024];
try {
    InputStream is = url.openStream();
    int bufferLen;
    while ((bufferLen = is.read(buffer)) > -1) {
        // Write to the PDF file stream (the original wrote to this.getOutputStream(),
        // leaving dos unused, so test.pdf stayed empty)
        dos.write(buffer, 0, bufferLen);
    }
    dos.close();
    is.close();
} catch (IOException ex) {
    ex.printStackTrace();
}
Edit: Here's the data I'm getting from the GET request:
https://pastebin.com/qYtGXUsd
Solved! I used a different API and it works perfectly.
https://github.com/YtoTech/latex-on-http
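For anyone hitting the same symptom with the original endpoint: a likely cause (an assumption on my part, not confirmed in the thread) is that the service returns an HTML error page instead of a PDF when compilation fails, so checking the response Content-Type before writing the file makes the failure visible. A minimal sketch using the question's url:

// Hedged sketch: verify we actually received a PDF before saving it.
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
String contentType = conn.getContentType();
if (contentType != null && contentType.startsWith("application/pdf")) {
    try (InputStream is = conn.getInputStream();
         FileOutputStream fos = new FileOutputStream("test.pdf")) {
        byte[] buf = new byte[8192];
        int len;
        while ((len = is.read(buf)) > -1) {
            fos.write(buf, 0, len);
        }
    }
} else {
    // Not a PDF: log the type so the server's error output can be inspected.
    System.err.println("Unexpected Content-Type: " + contentType);
}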
I am trying to download a PDF file with HttpClient. It downloads the file, but the pages are blank. I can see the bytes of the response if I print them to the console, but writing them to a file produces a blank file:
FileUtils.writeByteArrayToFile(new File(outputFilePath), bytes);
However, the files show the correct sizes of 103 KB and 297 KB as expected, but they are just blank!
I tried with an output stream as well:
FileOutputStream fileOutputStream = new FileOutputStream(outFile);
fileOutputStream.write(bytes);
I also tried writing with UTF-8 encoding:
Writer out = new BufferedWriter(new OutputStreamWriter(
        new FileOutputStream(outFile), "UTF-8"));
String str = new String(bytes, StandardCharsets.UTF_8);
try {
    out.write(str);
} finally {
    out.close();
}
Nothing is working for me. Any suggestion is highly appreciated.
Update: I am using DefaultHttpClient.
HttpGet httpget = new HttpGet(targetURI);
HttpResponse response = null;
String htmlContents = null;
try {
    httpget = new HttpGet(url);
    response = httpclient.execute(httpget);
    InputStreamReader dataStream = new InputStreamReader(response.getEntity().getContent());
    byte[] bytes = IOUtils.toByteArray(dataStream);
    ...
You do
InputStreamReader dataStream=new InputStreamReader(response.getEntity().getContent());
byte[] bytes = IOUtils.toByteArray(dataStream);
As has already been mentioned in the comments, using a Reader class can corrupt binary data such as PDF files, so you should not wrap your content in an InputStreamReader.
Since your content can be used to construct an InputStreamReader, though, I assume response.getEntity().getContent() returns an InputStream. Such an InputStream can usually be passed directly to IOUtils.toByteArray.
So:
InputStream dataStream=response.getEntity().getContent();
byte[] bytes = IOUtils.toByteArray(dataStream);
should already work for you!
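For completeness, a minimal sketch of the corrected flow (assuming Apache HttpClient 4.x and commons-io, and reusing the question's targetURI, httpclient, and outputFilePath):

HttpGet httpget = new HttpGet(targetURI);
HttpResponse response = httpclient.execute(httpget);
try (InputStream dataStream = response.getEntity().getContent()) {
    // Copy raw bytes; no Reader, no charset conversion
    byte[] bytes = IOUtils.toByteArray(dataStream);
    FileUtils.writeByteArrayToFile(new File(outputFilePath), bytes);
}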
Here is a method I use to download a PDF file from a specific URL. The method takes two string arguments: a URL string (for example: "https://www.ibm.com/support/knowledgecenter/SSWRCJ_4.1.0/com.ibm.safos.doc_4.1/Planning_and_Installation.pdf") and a destination folder path to download the PDF file (or whatever) into. If the destination path does not exist within the local file system, it is created automatically:
public boolean downloadFile(String urlString, String destinationFolderPath) {
    boolean result = false; // will turn to true if download is successful
    if (!destinationFolderPath.endsWith("/") && !destinationFolderPath.endsWith("\\")) {
        destinationFolderPath += "/";
    }
    // If the destination path does not exist then create it.
    File foldersToMake = new File(destinationFolderPath);
    if (!foldersToMake.exists()) {
        foldersToMake.mkdirs();
    }
    try {
        // Open connection
        URL url = new URL(urlString);
        // Get just the file name from the URL
        String fileName = new File(url.getPath()).getName();
        // Try with resources...
        try (InputStream in = url.openStream();
             FileOutputStream outStream = new FileOutputStream(new File(destinationFolderPath + fileName))) {
            // Read from resource and write to file...
            int length;
            byte[] buffer = new byte[1024]; // buffer for portion of data from connection
            while ((length = in.read(buffer)) > -1) {
                outStream.write(buffer, 0, length);
            }
        }
        // File successfully downloaded
        result = true;
    }
    catch (MalformedURLException ex) { ex.printStackTrace(); }
    catch (IOException ex) { ex.printStackTrace(); }
    return result;
}
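For example (the destination folder here is hypothetical):

boolean ok = downloadFile(
        "https://www.ibm.com/support/knowledgecenter/SSWRCJ_4.1.0/com.ibm.safos.doc_4.1/Planning_and_Installation.pdf",
        "C:/Downloads/");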
I am working on a task where I need to download multiple files from a single hyperlink: when I call one API link, I want it to return multiple files.
My current code is only downloading a single file:
try {
    URL url = new URL(f_url[0]);
    URLConnection connection = url.openConnection();
    connection.connect();
    // Getting file length
    int lengthOfFile = connection.getContentLength();
    // Input stream to read file - with 8k buffer
    InputStream input = new BufferedInputStream(url.openStream(), 8192);
    // Output stream to write file
    OutputStream output = new FileOutputStream("/sdcard/downloadedfile.jpg");
    byte[] data = new byte[1024];
    int count;
    long total = 0;
    while ((count = input.read(data)) != -1) {
        total += count;
        // Publishing the progress...
        // After this, onProgressUpdate will be called
        publishProgress("" + (int) ((total * 100) / lengthOfFile));
        // Writing data to file
        output.write(data, 0, count);
    }
    // Flushing output
    output.flush();
    // Closing streams
    output.close();
    input.close();
} catch (Exception e) {
    Log.e("Error: ", e.getMessage());
}
Check this site. It downloads multiple files and shows their content on your screen.
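A single HTTP response carries only one body, so one common pattern (my suggestion, not necessarily what the linked site does) is to have the server bundle the files into a zip archive and unpack it on the client. A minimal sketch, where destDir is a hypothetical output directory:

URL url = new URL(f_url[0]);
try (ZipInputStream zin = new ZipInputStream(new BufferedInputStream(url.openStream()))) {
    ZipEntry entry;
    byte[] data = new byte[8192];
    // Each zip entry becomes one local file
    while ((entry = zin.getNextEntry()) != null) {
        File outFile = new File(destDir, entry.getName());
        try (OutputStream output = new FileOutputStream(outFile)) {
            int count;
            while ((count = zin.read(data)) != -1) {
                output.write(data, 0, count);
            }
        }
        zin.closeEntry();
    }
}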
I need a very simple function that allows me to read the first 1 KB of a file over FTP. I want to use it from MATLAB to read the first lines and, depending on some parameters, download only the files I really need. I found some examples online that unfortunately do not work. Here is my sample code, where I'm trying to download a single file (I'm using the Apache Commons Net library).
FTPClient client = new FTPClient();
FileOutputStream fos = null;
try {
    client.connect("data.site.org");
    // filename to be downloaded.
    String filename = "filename.Z";
    fos = new FileOutputStream(filename);
    // Download file from FTP server
    InputStream stream = client.retrieveFileStream("/pub/obs/2008/021/ab120210.08d.Z");
    byte[] b = new byte[1024];
    stream.read(b);
    fos.write(b);
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        if (fos != null) {
            fos.close();
        }
        client.disconnect();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The problem is that stream comes back empty. I know I'm passing the path in the wrong way, but I can't figure out how it should be done. I've tried many variations.
I've also tried Java's URL classes:
URL url = new URL("ftp://data.site.org/pub/obs/2008/021/ab120210.08d.Z");
URLConnection con = url.openConnection();
BufferedInputStream in = new BufferedInputStream(con.getInputStream());
FileOutputStream out = new FileOutputStream("C:\\filename.Z");
int i;
byte[] bytesIn = new byte[1024];
if ((i = in.read(bytesIn)) >= 0) {
    out.write(bytesIn, 0, i); // write only the bytes actually read
}
out.close();
in.close();
but it throws an error when I close the InputStream in!
I'm definitely stuck. Any comments would be very useful!
Try this test:
InputStream is = new URL("ftp://test:test@ftp.secureftp-test.com/bookstore.xml").openStream(); // note '@' between credentials and host
byte[] a = new byte[1000];
int n = is.read(a);
is.close();
System.out.println(new String(a, 0, n));
It definitely works.
In my experience, when you read bytes from a stream acquired via ftpClient.retrieveFileStream, a single read is not guaranteed to fill your byte buffer. You should either call stream.read(b) in a loop, checking its return value, or use a utility library to fill up the 1024-byte buffer:
InputStream stream = null;
try {
    // Download file from FTP server
    stream = client.retrieveFileStream("/pub/obs/2008/021/ab120210.08d.Z");
    byte[] b = new byte[1024];
    IOUtils.read(stream, b); // calls stream.read() repeatedly until the buffer is full or end-of-file
    fos.write(b);
} catch (IOException e) {
    e.printStackTrace();
} finally {
    IOUtils.closeQuietly(stream); // the variable is named stream, not inputStream
}
I cannot understand why it doesn't work. I found this link where they use the Apache library to read 4096 bytes at a time. Reading the first 1024 bytes eventually works; the only thing is that if completePendingCommand() is used, the program hangs forever (presumably because it waits for the transfer to complete, which a partial read never does). Thus I've removed it and everything works fine.
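A sketch of another way to read just the first kilobyte without that hang, assuming Apache Commons Net and commons-io (using abort() here is my suggestion, not something from the thread):

// Hedged sketch: do a partial read, then abort the transfer so the
// client is left in a usable state instead of waiting for completion.
InputStream stream = client.retrieveFileStream("/pub/obs/2008/021/ab120210.08d.Z");
byte[] b = new byte[1024];
int read = IOUtils.read(stream, b); // fills the buffer or stops at end-of-file
stream.close();
client.abort(); // tell the server to stop sending the rest of the file
fos.write(b, 0, read);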
I want to download the result of an HTTP query with Java, but the file I download has an undetermined length while downloading.
I thought this would be quite standard, so I searched and found a code snippet for it: http://snipplr.com/view/33805/
But it has a problem with the contentLength variable: as the length is unknown, I get -1 back, which creates an error. If I omit the contentLength check entirely, I always have to use the maximum buffer.
The problem then is that the file is not ready yet, so the buffer gets only partially filled and parts of the file are lost.
If you try downloading a link like http://overpass-api.de/api/interpreter?data=area%5Bname%3D%22Hoogstade%22%5D%3B%0A%28%0A++node%28area%29%3B%0A++%3C%3B%0A%29+%3B%0Aout+meta+qt%3B with that snippet, you'll see the error, and if you always write the full buffer to avoid it, you end up with a corrupt XML file.
Is there some way to download only the part of the file that is ready? I'd also like this to handle big files (up to a few GB).
This should work; I tested it and it works for me:
void downloadFromUrl(URL url, String localFilename) throws IOException {
    InputStream is = null;
    FileOutputStream fos = null;
    try {
        URLConnection urlConn = url.openConnection(); // connect
        is = urlConn.getInputStream();                // get connection input stream
        fos = new FileOutputStream(localFilename);    // open output stream to local file
        byte[] buffer = new byte[4096];               // declare 4KB buffer
        int len;
        // While we have available data, continue downloading and storing to local file
        while ((len = is.read(buffer)) > 0) {
            fos.write(buffer, 0, len);
        }
    } finally {
        try {
            if (is != null) {
                is.close();
            }
        } finally {
            if (fos != null) {
                fos.close();
            }
        }
    }
}
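On Java 7+ the same copy loop can also be written with java.nio.file.Files.copy, which likewise reads until end-of-stream and therefore needs no content length; a minimal equivalent sketch:

try (InputStream is = url.openConnection().getInputStream()) {
    // Files.copy reads the stream to its end, regardless of Content-Length
    Files.copy(is, Paths.get(localFilename), StandardCopyOption.REPLACE_EXISTING);
}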
If you want this to run in the background, simply call it from a Thread:
Thread download = new Thread() {
    public void run() {
        try {
            URL url = new URL("http://overpass-api.de/api/interpreter?data=area%5Bname%3D%22Hoogstade%22%5D%3B%0A%28%0A++node%28area%29%3B%0A++%3C%3B%0A%29+%3B%0Aout+meta+qt%3B");
            String localFilename = "mylocalfile"; // needs to be replaced with a local file path
            downloadFromUrl(url, localFilename);
        } catch (IOException e) {
            // new URL(...) and downloadFromUrl(...) throw checked exceptions
            // that run() cannot propagate, so they must be handled here
            e.printStackTrace();
        }
    }
};
download.start(); // start the thread