Uploading a user file on a Discord bot - Java

I'm trying to have users submit files on Discord that are then uploaded to a site for processing. However, there seems to be something wrong with my code.
When I upload a file I get response code 400 (Bad Request), but the site should accept files in the w3g format.
private static void onFileUpload(Message.Attachment attachment, TextChannel channel) {
    // api.wc3stats.com/upload
    String fileName = attachment.getFileName();
    if (fileName.substring(fileName.length() - 3).equals("w3g")) {
        File file = new File(attachment.getFileName());
        attachment.downloadToFile(file);
        try {
            HttpHelper.postFile(file, "https://api.wc3stats.com/upload");
            channel.sendMessage("Uploading: " + file.toString()).queue();
        } catch (IOException e) {
            e.printStackTrace();
            channel.sendMessage(e.getMessage()).queue();
        }
        System.out.println("Deleted");
        file.delete();
    } else {
        channel.sendMessage("Invalid file type.").queue();
    }
}
public static void postFile(File file, String url) throws IOException {
    HttpURLConnection connection = null;
    FileInputStream fis = null;
    BufferedInputStream bis = null;
    BufferedOutputStream out = null;
    try {
        URL urlForPostRequest = new URL(url);
        connection = (HttpURLConnection) urlForPostRequest.openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        //if (responseCode == HttpURLConnection.HTTP_OK) {
        fis = new FileInputStream(file);
        bis = new BufferedInputStream(fis);
        out = new BufferedOutputStream(connection.getOutputStream());
        byte[] buffer = new byte[8192];
        int i;
        while ((i = bis.read(buffer)) > 0) {
            out.write(buffer, 0, i);
        }
        //}
        int responseCode = connection.getResponseCode();
        System.out.println("Response: " + connection.getResponseMessage());
        System.out.println(responseCode);
    } finally {
        if (fis != null) fis.close();
        if (bis != null) bis.close();
        if (out != null) out.close();
        System.out.println("closed");
    }
}
The current exception being thrown says the file cannot be found, yet I can clearly see it in the program directory after it has been attached to Discord and downloaded. This also only happens the first time I upload the file; the second time I upload the same file it just gives me the Bad Request response.
First upload attempt of LastReplay.w3g, output:
java.io.FileNotFoundException: lol.w3g
C:\SomePath\lol.w3g
cannot be read
closed
Deleted
I can also note that the file now exists in the program directory, though it should have been removed after everything completed.
Second upload attempt of LastReplay.w3g, output:
C:\SomePath\lol.w3g
Can be read
Response: Bad Request
closed
Deleted

I managed to do it in JavaScript with the help of request-promise; in Java I did not...
if (msg.attachments.size > 0) {
    let attached = msg.attachments.array()[0];
    const request = require('request-promise');
    let formData = {
        file: request(attached.url)
    };
    request.post('https://api.wc3stats.com/upload', {formData, json: true})
        .then(function (json) {
            // on success handle response json
        })
        .catch(function (err) {
            console.log(err);
        });
}
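Two things stand out when comparing the failing Java code with the working JavaScript. First, in recent JDA versions Message.Attachment#downloadToFile is asynchronous (it returns a CompletableFuture), so postFile can run before the download has finished writing the file; that would explain the FileNotFoundException on the first attempt and the success on the second, when the file already exists. Chaining the upload onto the future, e.g. attachment.downloadToFile(file).thenAccept(f -> ...), avoids the race. Second, postFile writes the raw file bytes as the POST body, while the request-promise version sends the file as a multipart/form-data field named file; a server expecting a form upload would plausibly reject a raw body with 400. Below is a minimal sketch of a manual multipart upload with HttpURLConnection; the field name file mirrors the JavaScript formData and is an assumption about what the API expects.
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class MultipartUpload {
    // Sketch: POST a single file as multipart/form-data.
    // The "file" field name is assumed from the working JavaScript example.
    public static int postFile(File file, String url) throws IOException {
        String boundary = "----JavaUpload" + System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(("--" + boundary + "\r\n"
                    + "Content-Disposition: form-data; name=\"file\"; filename=\"" + file.getName() + "\"\r\n"
                    + "Content-Type: application/octet-stream\r\n\r\n").getBytes(StandardCharsets.UTF_8));
            Files.copy(file.toPath(), out); // file bytes as the part body
            out.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // expect 200 instead of 400 if the server wanted multipart
    }
}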

Related

Download File from Direct Download URL

I'm trying to download the following file, with this link that redirects you to a direct download: http://www.lavozdegalicia.es/sitemap_sections.xml.gz
I've done my own research, but all the results I see are related to HTTP URL redirections [3xx] and not to direct download redirections (maybe I'm using the wrong terms in my research).
I've tried the following pieces of code (cite: https://programmerclick.com/article/7719159084/ ):
// Using Java IO
private static void downloadFileFromUrlWithJavaIO(String fileName, String fileUrl) {
    BufferedInputStream inputStream = null;
    FileOutputStream outputStream = null;
    try {
        URL url = new URL(fileUrl);
        inputStream = new BufferedInputStream(url.openStream());
        outputStream = new FileOutputStream(fileName);
        byte[] data = new byte[1024];
        int count;
        while ((count = inputStream.read(data, 0, 1024)) != -1) {
            outputStream.write(data, 0, count);
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            if (inputStream != null) {
                inputStream.close();
            }
            if (outputStream != null) {
                outputStream.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
// Using Apache Commons IO
private static void downloadFileFromUrlWithCommonsIO(String fileName, String fileUrl) {
    try {
        FileUtils.copyURLToFile(new URL(fileUrl), new File(fileName));
    } catch (IOException e) {
        e.printStackTrace();
    }
}
// Using NIO
private static void downloadFileFromURLUsingNIO(String fileName, String fileUrl) {
    try {
        URL url = new URL(fileUrl);
        ReadableByteChannel readableByteChannel = Channels.newChannel(url.openStream());
        FileOutputStream fileOutputStream = new FileOutputStream(fileName);
        fileOutputStream.getChannel().transferFrom(readableByteChannel, 0, Long.MAX_VALUE);
        fileOutputStream.close();
        readableByteChannel.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
But the result I get with any of the three options is an empty file. My thought is that the problem is related to the file being a .xml.gz, because when I debug it the inputStream doesn't seem to have any content.
I've run out of options; does anyone have an idea of how to handle this case, or what the correct terms would be to research this specific case?
I found a solution. There's probably a more polished way of achieving the same result, but this worked fine for me:
// Download the file and decompress it
filecount = 0;
URL compressedSitemap = new URL(urlString);
HttpURLConnection con = (HttpURLConnection) compressedSitemap.openConnection();
con.setRequestMethod("GET");
if (con.getResponseCode() == HttpURLConnection.HTTP_MOVED_TEMP || con.getResponseCode() == HttpURLConnection.HTTP_MOVED_PERM) {
    String location = con.getHeaderField("Location");
    URL newUrl = new URL(location);
    con = (HttpURLConnection) newUrl.openConnection();
}
String file = "/home/user/Documentos/Decompression/decompressed" + filecount + ".xml";
GZIPInputStream gzipInputStream = new GZIPInputStream(con.getInputStream());
FileOutputStream fos = new FileOutputStream(file);
byte[] buffer = new byte[1024];
int len = 0;
while ((len = gzipInputStream.read(buffer)) > 0) {
    fos.write(buffer, 0, len);
}
fos.close();
filecount++;
Two things to note:
When I tried an HTTP GET on the URL, which was a redirect, the response code was 301 or 302 (depending on the example I used). I overcame this problem with the if check, which follows the redirect and points at the actual file.
Once pointed at the file, I found GZIPInputStream, which let me get an InputStream directly from the compressed content and dump it into an XML file. That saved me from doing it in three steps (decompress, read, copy).
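As a side note, HttpURLConnection follows redirects within the same protocol by default, but it refuses to follow a redirect that switches protocols (for example, http to https), which is a common reason the manual Location check above is needed. For a same-protocol chain the default behavior is usually enough; a minimal sketch, not specific to this site:
HttpURLConnection con = (HttpURLConnection) new URL(urlString).openConnection();
con.setInstanceFollowRedirects(true); // the default; same-protocol redirects only
// A cross-protocol redirect (http -> https) still requires the manual
// Location-header hop shown in the snippet above.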

How to download a file from Internet using Java, with URL query parameters [duplicate]

There is an online file (such as http://www.example.com/information.asp) I need to grab and save to a directory. I know there are several methods for grabbing and reading online files (URLs) line-by-line, but is there a way to just download and save the file using Java?
Give Java NIO a try:
URL website = new URL("http://www.website.com/information.asp");
ReadableByteChannel rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream("information.html");
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
Using transferFrom() is potentially much more efficient than a simple loop that reads from the source channel and writes to this channel. Many operating systems can transfer bytes directly from the source channel into the filesystem cache without actually copying them.
Check more about it here.
Note: The third parameter in transferFrom is the maximum number of bytes to transfer. Integer.MAX_VALUE will transfer at most 2^31 - 1 bytes; Long.MAX_VALUE allows at most 2^63 - 1 bytes (larger than any file in existence).
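One caveat worth hedging: a single transferFrom call is not guaranteed to copy the whole stream (it may return before the source is exhausted, especially over a network). A defensive variant loops until nothing more is transferred; a sketch:
URL website = new URL("http://www.website.com/information.asp");
try (ReadableByteChannel rbc = Channels.newChannel(website.openStream());
     FileOutputStream fos = new FileOutputStream("information.html")) {
    long position = 0;
    long transferred;
    // transferFrom may do a partial transfer; advance the position and repeat.
    while ((transferred = fos.getChannel().transferFrom(rbc, position, 1 << 24)) > 0) {
        position += transferred;
    }
}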
Use Apache Commons IO. It is just one line of code:
FileUtils.copyURLToFile(URL, File)
Simpler usage with java.nio:
URL website = new URL("http://www.website.com/information.asp");
Path target = Paths.get("information.html"); // destination path ('target' was left undefined in the original)
try (InputStream in = website.openStream()) {
    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
}
public void saveUrl(final String filename, final String urlString)
        throws MalformedURLException, IOException {
    BufferedInputStream in = null;
    FileOutputStream fout = null;
    try {
        in = new BufferedInputStream(new URL(urlString).openStream());
        fout = new FileOutputStream(filename);
        final byte[] data = new byte[1024];
        int count;
        while ((count = in.read(data, 0, 1024)) != -1) {
            fout.write(data, 0, count);
        }
    } finally {
        if (in != null) {
            in.close();
        }
        if (fout != null) {
            fout.close();
        }
    }
}
You'll need to handle exceptions, probably external to this method.
Here is a concise, readable, JDK-only solution with properly closed resources:
static long download(String url, String fileName) throws IOException {
    try (InputStream in = URI.create(url).toURL().openStream()) {
        return Files.copy(in, Paths.get(fileName));
    }
}
Two lines of code and no dependencies.
Here's a complete file downloader example program with output, error checking, and command line argument checks:
package so.downloader;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Application {
    public static void main(String[] args) throws IOException {
        if (2 != args.length) {
            System.out.println("USAGE: java -jar so-downloader.jar <source-URL> <target-filename>");
            System.exit(1);
        }
        String sourceUrl = args[0];
        String targetFilename = args[1];
        long bytesDownloaded = download(sourceUrl, targetFilename);
        System.out.println(String.format("Downloaded %d bytes from %s to %s.", bytesDownloaded, sourceUrl, targetFilename));
    }

    static long download(String url, String fileName) throws IOException {
        try (InputStream in = URI.create(url).toURL().openStream()) {
            return Files.copy(in, Paths.get(fileName));
        }
    }
}
As noted in the so-downloader repository README:
To run file download program:
java -jar so-downloader.jar <source-URL> <target-filename>
For example:
java -jar so-downloader.jar https://github.com/JanStureNielsen/so-downloader/archive/main.zip so-downloader-source.zip
Downloading a file requires you to read it; either way, you will have to go through the file in some way. Instead of going line by line, you can just read bytes from the stream:
BufferedInputStream in = new BufferedInputStream(new URL("http://www.website.com/information.asp").openStream());
FileOutputStream out = new FileOutputStream("information.html"); // the original snippet left 'out' undefined
byte[] data = new byte[1024];
int count;
while ((count = in.read(data, 0, 1024)) != -1) {
    out.write(data, 0, count);
}
When using Java 7+, use the following method to download a file from the Internet and save it to some directory:
private static Path download(String sourceURL, String targetDirectory) throws IOException {
    URL url = new URL(sourceURL);
    String fileName = sourceURL.substring(sourceURL.lastIndexOf('/') + 1);
    Path targetPath = new File(targetDirectory + File.separator + fileName).toPath();
    Files.copy(url.openStream(), targetPath, StandardCopyOption.REPLACE_EXISTING);
    return targetPath;
}
Documentation is here.
This answer is almost exactly like the selected answer, but with two enhancements: it's a method and it closes out the FileOutputStream object:
public static void downloadFileFromURL(String urlString, File destination) {
    try {
        URL website = new URL(urlString);
        ReadableByteChannel rbc = Channels.newChannel(website.openStream());
        FileOutputStream fos = new FileOutputStream(destination);
        fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
        fos.close();
        rbc.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
import java.io.*;
import java.net.*;

public class filedown {
    public static void download(String address, String localFileName) {
        OutputStream out = null;
        URLConnection conn = null;
        InputStream in = null;
        try {
            URL url = new URL(address);
            out = new BufferedOutputStream(new FileOutputStream(localFileName));
            conn = url.openConnection();
            in = conn.getInputStream();
            byte[] buffer = new byte[1024];
            int numRead;
            long numWritten = 0;
            while ((numRead = in.read(buffer)) != -1) {
                out.write(buffer, 0, numRead);
                numWritten += numRead;
            }
            System.out.println(localFileName + "\t" + numWritten);
        } catch (Exception exception) {
            exception.printStackTrace();
        } finally {
            try {
                if (in != null) {
                    in.close();
                }
                if (out != null) {
                    out.close();
                }
            } catch (IOException ioe) {
                // ignore close failures
            }
        }
    }

    public static void download(String address) {
        int lastSlashIndex = address.lastIndexOf('/');
        if (lastSlashIndex >= 0 && lastSlashIndex < address.length() - 1) {
            try {
                download(address, new URL(address).getFile());
            } catch (MalformedURLException e) { // new URL(...) throws a checked exception
                e.printStackTrace();
            }
        } else {
            System.err.println("Could not figure out local file name for " + address);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < args.length; i++) {
            download(args[i]);
        }
    }
}
Personally, I've found Apache's HttpClient to be more than capable of everything I've needed to do with regards to this. Here is a great tutorial on using HttpClient
This is another Java 7 variant based on Brian Risk's answer, using a try-with-resources statement:
public static void downloadFileFromURL(String urlString, File destination) throws Throwable {
    URL website = new URL(urlString);
    try (ReadableByteChannel rbc = Channels.newChannel(website.openStream());
         FileOutputStream fos = new FileOutputStream(destination)) {
        fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
    }
}
There are many elegant and efficient answers here. But conciseness can make us lose some useful information. In particular, one often does not want to consider a connection error an exception, and one might want to treat some kinds of network-related errors differently - for example, to decide whether the download should be retried.
Here's a method that does not throw exceptions for network errors (only for truly exceptional problems, such as a malformed URL or problems writing to the file):
/**
 * Downloads from a (http/https) URL and saves to a file.
 * Does not consider a connection error an Exception. Instead it returns:
 *
 * 0=ok
 * 1=connection interrupted, timeout (but something was read)
 * 2=not found (FileNotFoundException) (404)
 * 3=server error (500...)
 * 4=could not connect: connection timeout (no internet?) java.net.SocketTimeoutException
 * 5=could not connect: (server down?) java.net.ConnectException
 * 6=could not resolve host (bad host, or no internet - no dns)
 *
 * @param file File to write. Parent directory will be created if necessary
 * @param url http/https url to connect
 * @param secsConnectTimeout Seconds to wait for connection establishment
 * @param secsReadTimeout Read timeout in seconds - transmission will abort if it freezes more than this
 * @return See above
 * @throws IOException Only if URL is malformed or if could not create the file
 */
public static int saveUrl(final Path file, final URL url,
        int secsConnectTimeout, int secsReadTimeout) throws IOException {
    Files.createDirectories(file.getParent()); // make sure parent dir exists; this can throw an exception
    URLConnection conn = url.openConnection(); // can throw an exception if the URL is bad
    if (secsConnectTimeout > 0) conn.setConnectTimeout(secsConnectTimeout * 1000);
    if (secsReadTimeout > 0) conn.setReadTimeout(secsReadTimeout * 1000);
    int ret = 0;
    boolean somethingRead = false;
    try (InputStream is = conn.getInputStream()) {
        try (BufferedInputStream in = new BufferedInputStream(is);
             OutputStream fout = Files.newOutputStream(file)) {
            final byte[] data = new byte[8192];
            int count;
            while ((count = in.read(data)) > 0) {
                somethingRead = true;
                fout.write(data, 0, count);
            }
        }
    } catch (java.io.IOException e) {
        int httpcode = 999;
        try {
            httpcode = ((HttpURLConnection) conn).getResponseCode();
        } catch (Exception ee) { /* best-effort */ }
        if (somethingRead && e instanceof java.net.SocketTimeoutException) ret = 1;
        else if (e instanceof FileNotFoundException && httpcode >= 400 && httpcode < 500) ret = 2;
        else if (httpcode >= 400 && httpcode < 600) ret = 3;
        else if (e instanceof java.net.SocketTimeoutException) ret = 4;
        else if (e instanceof java.net.ConnectException) ret = 5;
        else if (e instanceof java.net.UnknownHostException) ret = 6;
        else throw e;
    }
    return ret;
}
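The return codes make a retry policy straightforward. A short usage sketch (the destination path and URL are placeholders):
Path out = Paths.get("download.bin");              // placeholder destination
URL src = new URL("https://example.com/file.bin"); // placeholder source
int status = -1;
for (int attempt = 1; attempt <= 3 && status != 0; attempt++) {
    status = saveUrl(out, src, 10, 30);
    if (status == 2 || status == 6) break; // not found or bad host: retrying won't help
}
if (status != 0) System.err.println("Download failed with status " + status);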
It's possible to download the file with Apache's HttpComponents instead of Commons IO. This code allows you to download a file in Java according to its URL and save it at a specific destination.
public static boolean saveFile(URL fileURL, String fileSavePath) {
    boolean isSucceed = true;
    CloseableHttpClient httpClient = HttpClients.createDefault();
    HttpGet httpGet = new HttpGet(fileURL.toString());
    httpGet.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0");
    httpGet.addHeader("Referer", "https://www.google.com");
    try {
        CloseableHttpResponse httpResponse = httpClient.execute(httpGet);
        HttpEntity fileEntity = httpResponse.getEntity();
        if (fileEntity != null) {
            FileUtils.copyInputStreamToFile(fileEntity.getContent(), new File(fileSavePath));
        }
    } catch (IOException e) {
        isSucceed = false;
    }
    httpGet.releaseConnection();
    return isSucceed;
}
In contrast to the single line of code:
FileUtils.copyURLToFile(fileURL, new File(fileSavePath),
    URLS_FETCH_TIMEOUT, URLS_FETCH_TIMEOUT);
this code gives you more control over the process and lets you specify not only timeouts but also User-Agent and Referer values, which are critical for many websites.
Below is the sample code to download a movie from the Internet with Java code:
URL url = new URL("http://103.66.178.220/ftp/HDD2/Hindi%20Movies/2018/Hichki%202018.mkv");
BufferedInputStream bufferedInputStream = new BufferedInputStream(url.openStream());
FileOutputStream stream = new FileOutputStream("/home/sachin/Desktop/test.mkv");
int count = 0;
byte[] b1 = new byte[100];
while ((count = bufferedInputStream.read(b1)) != -1) {
    System.out.println("b1:" + b1 + ">>" + count + ">> KB downloaded:" + new File("/home/sachin/Desktop/test.mkv").length() / 1024);
    stream.write(b1, 0, count);
}
Solution using java.net.http.HttpClient with Authorization:
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .GET()
        .header("Accept", "application/json")
        // .header("Authorization", "Basic ci5raG9kemhhZXY6NDdiYdfjlmNUM=") if you need
        .uri(URI.create("https://jira.google.ru/secure/attachment/234096/screenshot-1.png"))
        .build();
HttpResponse<InputStream> response = client.send(request, HttpResponse.BodyHandlers.ofInputStream());
String target = "/tmp/"; // destination directory ('target' was left undefined in the original)
try (InputStream in = response.body()) {
    Files.copy(in, Paths.get(target + "screenshot-1.png"), StandardCopyOption.REPLACE_EXISTING);
}
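If the body only needs to land in a file, java.net.http can also stream it straight to disk without the manual copy, via HttpResponse.BodyHandlers.ofFile (Java 11+); a minimal sketch with a placeholder path:
HttpResponse<Path> saved = client.send(request,
        HttpResponse.BodyHandlers.ofFile(Paths.get("/tmp/screenshot-1.png"))); // placeholder path
System.out.println("Saved to " + saved.body());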
To summarize (and somewhat polish and update) the previous answers: the three following methods are practically equivalent. (I added explicit timeouts because I think they are a must; nobody wants a download to freeze forever when the connection is lost.)
public static void saveUrl1(final Path file, final URL url,
        int secsConnectTimeout, int secsReadTimeout)
        throws MalformedURLException, IOException {
    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (BufferedInputStream in = new BufferedInputStream(
            streamFromUrl(url, secsConnectTimeout, secsReadTimeout));
         OutputStream fout = Files.newOutputStream(file)) {
        final byte[] data = new byte[8192];
        int count;
        while ((count = in.read(data)) > 0)
            fout.write(data, 0, count);
    }
}

public static void saveUrl2(final Path file, final URL url,
        int secsConnectTimeout, int secsReadTimeout)
        throws MalformedURLException, IOException {
    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (ReadableByteChannel rbc = Channels.newChannel(
            streamFromUrl(url, secsConnectTimeout, secsReadTimeout));
         FileChannel channel = FileChannel.open(file,
                 StandardOpenOption.CREATE,
                 StandardOpenOption.TRUNCATE_EXISTING,
                 StandardOpenOption.WRITE)) {
        channel.transferFrom(rbc, 0, Long.MAX_VALUE);
    }
}

public static void saveUrl3(final Path file, final URL url,
        int secsConnectTimeout, int secsReadTimeout)
        throws MalformedURLException, IOException {
    // Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
    try (InputStream in = streamFromUrl(url, secsConnectTimeout, secsReadTimeout)) {
        Files.copy(in, file, StandardCopyOption.REPLACE_EXISTING);
    }
}

public static InputStream streamFromUrl(URL url, int secsConnectTimeout, int secsReadTimeout) throws IOException {
    URLConnection conn = url.openConnection();
    if (secsConnectTimeout > 0)
        conn.setConnectTimeout(secsConnectTimeout * 1000);
    if (secsReadTimeout > 0)
        conn.setReadTimeout(secsReadTimeout * 1000);
    return conn.getInputStream();
}
I don't find significant differences, and all seem right to me. They are safe and efficient. (Differences in speed seem hardly relevant - I write 180 MB from the local server to an SSD disk in times that fluctuate around 1.2 to 1.5 seconds.) They don't require external libraries. All work with arbitrary sizes and (in my experience) HTTP redirections.
Additionally, all throw FileNotFoundException if the resource is not found (typically error 404) and java.net.UnknownHostException if the DNS resolution fails; other IOExceptions correspond to errors during transmission.
There is a method, U.fetch(url), in the underscore-java library.
File pom.xml:
<dependency>
    <groupId>com.github.javadev</groupId>
    <artifactId>underscore</artifactId>
    <version>1.84</version>
</dependency>
Code example:
import com.github.underscore.U;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Download {
    public static void main(String[] args) throws IOException {
        Files.write(Paths.get("data.bin"),
            U.fetch("https://stackoverflow.com/questions"
                + "/921262/how-to-download-and-save-a-file-from-internet-using-java").blob());
    }
}
You can do this in one line using netloader for Java:
new NetFile(new File("my/zips/1.zip"), "https://example.com/example.zip", -1).load(); // Returns true if succeed, otherwise false.
This reads a file from the Internet and writes it into a local file.
import java.net.URL;
import java.io.FileOutputStream;
import java.io.File;

public class Download {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png"); // Input URL
        FileOutputStream out = new FileOutputStream(new File("out.png")); // Output file
        out.write(url.openStream().readAllBytes());
        out.close();
    }
}
There is an issue with the simple usage of:
org.apache.commons.io.FileUtils.copyURLToFile(URL, File)
if you need to download and save very large files, or in general if you need automatic retries in case the connection drops.
I suggest Apache HttpClient in such cases, along with org.apache.commons.io.FileUtils. For example:
GetMethod method = new GetMethod(resource_url);
try {
    int statusCode = client.executeMethod(method);
    if (statusCode != HttpStatus.SC_OK) {
        logger.error("Get method failed: " + method.getStatusLine());
    }
    org.apache.commons.io.FileUtils.copyInputStreamToFile(
        method.getResponseBodyAsStream(), new File(resource_file));
} catch (HttpException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    method.releaseConnection();
}
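As written, though, the snippet above still performs no retries itself; a loop like the following would have to wrap it. This is a sketch under the same assumptions as the original (the commons-httpclient 3.x GetMethod API, an already-configured client, and the resource_url / resource_file placeholders):
// Retry the download a few times if an I/O error interrupts it.
int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    GetMethod method = new GetMethod(resource_url);
    try {
        int statusCode = client.executeMethod(method);
        if (statusCode == HttpStatus.SC_OK) {
            org.apache.commons.io.FileUtils.copyInputStreamToFile(
                method.getResponseBodyAsStream(), new File(resource_file));
            break; // success, stop retrying
        }
        logger.error("Get method failed: " + method.getStatusLine());
    } catch (IOException e) {
        if (attempt == maxAttempts) {
            e.printStackTrace(); // out of attempts
        }
    } finally {
        method.releaseConnection();
    }
}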
First method, using a NIO channel:
ReadableByteChannel rbc = Channels.newChannel(new URL("https://asd/abc.txt").openStream());
FileOutputStream fileOS = new FileOutputStream("C:/Users/local/abc.txt");
FileChannel writeCh = fileOS.getChannel();
writeCh.transferFrom(rbc, 0, Long.MAX_VALUE); // added: the original stopped before actually transferring
Second method, using FileUtils:
FileUtils.copyURLToFile(new URL("https://asd/abc.txt"), new File("C:/Users/system/abc.txt"));
Third method, using openStream() directly:
InputStream xy = new URL("https://asd/abc.txt").openStream();
This is how we can download a file using basic Java code and other third-party libraries. These are just for quick reference; please search with the above keywords for detailed information and other options.
If you are behind a proxy, you can set the proxies in the Java program as below:
Properties systemSettings = System.getProperties();
systemSettings.put("proxySet", "true");
systemSettings.put("https.proxyHost", "HTTPS proxy of your org");
systemSettings.put("https.proxyPort", "8080");
If you are not behind a proxy, don't include those lines in your code. Below is full working code to download a file when you are behind a proxy:
public static void main(String[] args) throws IOException {
    String url = "https://raw.githubusercontent.com/bpjoshi/fxservice/master/src/test/java/com/bpjoshi/fxservice/api/TradeControllerTest.java";
    OutputStream outStream = null;
    URLConnection connection = null;
    InputStream is = null;
    File targetFile = null;
    URL server = null;
    // Setting up proxies
    Properties systemSettings = System.getProperties();
    systemSettings.put("proxySet", "true");
    systemSettings.put("https.proxyHost", "HTTPS proxy of my organisation");
    systemSettings.put("https.proxyPort", "8080");
    // The same way we could also set a proxy for HTTP
    System.setProperty("java.net.useSystemProxies", "true");
    // Code to fetch the file
    try {
        server = new URL(url);
        connection = server.openConnection();
        is = connection.getInputStream();
        targetFile = new File("src/main/resources/targetFile.java");
        outStream = new FileOutputStream(targetFile);
        // Copy in a loop; is.available() only reports what can be read without
        // blocking, not the total size, so it must not be used to size a single read.
        byte[] buffer = new byte[8192];
        int count;
        while ((count = is.read(buffer)) != -1) {
            outStream.write(buffer, 0, count);
        }
    } catch (MalformedURLException e) {
        System.out.println("THE URL IS NOT CORRECT ");
        e.printStackTrace();
    } catch (IOException e) {
        System.out.println("I/O exception");
        e.printStackTrace();
    } finally {
        if (outStream != null)
            outStream.close();
    }
}
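Alternatively, instead of mutating global system properties, a proxy can be supplied per connection through java.net.Proxy; a small sketch with a placeholder host and port:
Proxy proxy = new Proxy(Proxy.Type.HTTP,
        new InetSocketAddress("proxy.example.org", 8080)); // placeholder proxy
URLConnection conn = new URL("https://example.com/file.txt").openConnection(proxy);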
public class DownloadManager {
    static String urls = "[WEBSITE NAME]";

    public static void main(String[] args) throws IOException {
        URL url = verify(urls);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        InputStream in = null;
        String filename = url.getFile();
        filename = filename.substring(filename.lastIndexOf('/') + 1);
        FileOutputStream out = new FileOutputStream("C:\\Java2_programiranje/Network/DownloadTest1/Project/Output" + File.separator + filename);
        in = connection.getInputStream();
        int read = -1;
        byte[] buffer = new byte[4096];
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            System.out.println("[SYSTEM/INFO]: Downloading file...");
        }
        in.close();
        out.close();
        System.out.println("[SYSTEM/INFO]: File Downloaded!");
    }

    private static URL verify(String url) {
        if (!url.toLowerCase().startsWith("http://")) {
            return null;
        }
        URL verifyUrl = null;
        try {
            verifyUrl = new URL(url);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return verifyUrl;
    }
}


PlayFramework. How to upload a photo using an external endpoint?

How do I upload a photo using a URL in the Play Framework?
I was thinking like this:
URL url = new URL("http://www.google.ru/intl/en_com/images/logo_plain.png");
BufferedImage img = ImageIO.read(url);
File newFile = new File("google.png");
ImageIO.write(img, "png", newFile);
But maybe there's another way. In the end I have to get the File and file name.
Example controller:
public static Result uploadPhoto(String urlPhoto) {
    URL url = new URL(urlPhoto); // doSomething
    // get the picture and write it to a temporary file
    File tempPhoto = myUploadPhoto;
    uploadFile(tempPhoto); // Here we make a copy of the file and save it to the file system.
    return ok("something");
}
To get that photo you can use the Play WS API. The code below is an example extracted from the Play docs, in the section Processing large responses; I recommend reading the full docs.
final Promise<File> filePromise = WS.url(url).get().map(
    new Function<WSResponse, File>() {
        public File apply(WSResponse response) throws Throwable {
            InputStream inputStream = null;
            OutputStream outputStream = null;
            try {
                inputStream = response.getBodyAsStream();
                // write the inputStream to a File
                final File file = new File("/tmp/response.txt");
                outputStream = new FileOutputStream(file);
                int read = 0;
                byte[] buffer = new byte[1024];
                while ((read = inputStream.read(buffer)) != -1) {
                    outputStream.write(buffer, 0, read);
                }
                return file;
            } catch (IOException e) {
                throw e;
            } finally {
                if (inputStream != null) { inputStream.close(); }
                if (outputStream != null) { outputStream.close(); }
            }
        }
    }
);
Where url is:
String url = "http://www.google.ru/intl/en_com/images/logo_plain.png";
This is as suggested in the Play documentation for large files:
"When you are downloading a large file or document, WS allows you to get the response body as an InputStream so you can process the data without loading the entire content into memory at once."
Pretty much the same as the above answer, and then some...
Route: POST /testFile 'location of your controller goes here'
Request body content: {"url":"http://www.google.ru/intl/en_com/images/logo_plain.png"}
Controller (using code from Java WS, Processing large responses):
public static Promise<Result> saveFile() {
    // you send the url in the request body in order to avoid complications with encoding
    final JsonNode body = request().body().asJson();
    // use new URL() to validate... not including it for brevity
    final String url = body.get("url").asText();
    // this one's copy/paste from Play Framework's docs
    final Promise<File> filePromise = WS.url(url).get().map(response -> {
        InputStream inputStream = null;
        OutputStream outputStream = null;
        try {
            inputStream = response.getBodyAsStream();
            final File file = new File("/temp/image");
            outputStream = new FileOutputStream(file);
            int read = 0;
            byte[] buffer = new byte[1024];
            while ((read = inputStream.read(buffer)) != -1) {
                outputStream.write(buffer, 0, read);
            }
            return file;
        } catch (IOException e) {
            throw e;
        } finally {
            if (inputStream != null) {
                inputStream.close();
            }
            if (outputStream != null) {
                outputStream.close();
            }
        }
    }); // copy/paste ended
    return filePromise.map(file -> (Result) ok(file.getName() + " saved!")).recover(
        t -> (Result) internalServerError("error -> " + t.getMessage()));
}
And that's it...
In order to serve the file after the upload phase, you can use this answer (I swear I'm not promoting myself...): static asset serving from absolute path in play framework 2.3.x

Problem in response from servlet output stream

In my Java-based web application, I am trying to write some files into a ZIP file and prompt the user to download/cancel/save. When the download dialog box opens and I click Cancel, then afterwards, if I try to access any links in my application, the dialog box opens again. Here's my code snippet.
private void sendResponse(byte[] buf, File tempFile) throws IOException {
    long length = tempFile.length();
    HttpServletResponse response = (HttpServletResponse) context.getExternalContext().getResponse();
    String disposition = "attachment; fileName=search_download.zip";
    ServletOutputStream servletOutputStream = null;
    InputStream in = null;
    try {
        if (buf != null) {
            in = new BufferedInputStream(new FileInputStream(tempFile));
            servletOutputStream = response.getOutputStream();
            response.setContentType("application/zip");
            response.setHeader("Content-Disposition", disposition);
            while ((in != null) && ((length = in.read(buf)) != -1)) {
                servletOutputStream.write(buf, 0, (int) length);
            }
        }
    } finally {
        if (servletOutputStream != null) {
            servletOutputStream.close();
        }
        if (in != null) {
            in.close();
        }
        if (tempFile != null) {
            tempFile.delete();
        }
    }
    context.responseComplete();
}
Also, once I click Save/Open, it works as expected. I suspect the problem is in clearing the response object. Please help me by providing some solutions.
EDIT
downloadSelected Method
public void downloadSelected() throws IOException {
    List<NodeRef> list = init();
    StringBuffer errors = new StringBuffer("");
    ZipOutputStream out = null;
    File tempFile = null;
    byte[] buf = null;
    try {
        if (list != null && list.size() > 0) {
            tempFile = TempFileProvider.createTempFile(TEMP_FILE_PREFIX, TEMP_FILE_SUFFIX_ZIP);
            out = new ZipOutputStream(new BufferedOutputStream(new FileOutputStream(tempFile)));
            buf = writeIntoZip(list, out);
            sendResponse(buf, tempFile);
        } else {
            errors.append("No Items Selected for Download");
            this.errorMessage = errors.toString();
        }
    } catch (IOException e) {
        System.out.println("Cancelled");
    }
}
Write into Zip method:
private byte[] writeIntoZip(List<NodeRef> list, ZipOutputStream out) throws IOException {
    String downloadUrl = "";
    InputStream bis = null;
    Node node = null;
    String nodeName = "";
    byte[] buf = null;
    Map<String, Integer> contents = new HashMap<String, Integer>();
    ContentReader reader = null;
    for (NodeRef nodeRef : list) {
        try {
            node = new Node(nodeRef);
            nodeName = node.getName();
            reader = Repository.getServiceRegistry(FacesContext.getCurrentInstance()).getContentService().getReader(nodeRef, ContentModel.PROP_CONTENT);
            bis = new BufferedInputStream(reader.getContentInputStream());
            if (bis != null) {
                contents = setFiles(contents, nodeName);
                nodeName = getUniqueFileName(contents, nodeName);
                buf = new byte[4 * 1024];
                buf = writeOutputStream(bis).toByteArray();
                out.putNextEntry(new ZipEntry(nodeName));
                out.write(buf);
            }
        } catch (Exception e) {
            e.printStackTrace();
            if (out != null) {
                out.close();
            }
        }
    }
    if (out != null) {
        out.close();
    }
    return buf;
}
Thanks,
Jeya
I'm admittedly not sure of the root cause of this problem. This behaviour is not explainable based on the code posted so far. An SSCCE would definitely help more. But I spot several potential causes of this problem. Perhaps fixing one or all of them will fix the concrete problem.
You've assigned JSF's FacesContext as a property of the bean. This is bad, definitely so if the bean has a broader scope than the request scope. It should always be obtained inside the local method scope by FacesContext#getCurrentInstance(). That call namely returns a thread-local variable, which should never be shared among other requests. Perhaps you've put the bean in the session scope, and a dangling response object of the previous request, with the headers already set, is being reused.
You are not catching the IOException of the close() methods. If the client cancels the download, then servletOutputStream.close() will throw an IOException indicating that the client has aborted the response. In your case, in won't be closed anymore, the tempFile won't be deleted anymore, and the JSF response won't be completed anymore. You should also catch the exception around close() and log/ignore it, so that the finally block can finish its job (see the sketch after these points). Perhaps the presence of tempFile has consequences for your future POST actions.
You are using <h:commandLink> instead of <h:outputLink> or <h:link> or plain <a> for page-to-page navigation. This uses POST instead of GET which is bad for user experience and SEO. You should use GET links instead of POST links. See also When should I use h:outputLink instead of h:commandLink?
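For the second point, here is a minimal sketch of what a hardened finally block could look like. The closeQuietly helper is hypothetical; it just logs and swallows close failures so the temp-file cleanup and responseComplete() always run:
// Hypothetical helper: close quietly so one failing close()
// cannot abort the rest of the cleanup.
private static void closeQuietly(java.io.Closeable resource) {
    if (resource != null) {
        try {
            resource.close();
        } catch (IOException e) {
            e.printStackTrace(); // e.g. client aborted the download; log and move on
        }
    }
}

// In sendResponse:
// } finally {
//     closeQuietly(servletOutputStream);
//     closeQuietly(in);
//     if (tempFile != null) {
//         tempFile.delete();
//     }
// }
// context.responseComplete();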
