Java - XML Parser & Downloader

I am creating an application that will download a lot of files from my own web server, but I don't know why it is not working; there is no response at all.
Here is part of my code:
Downloader.class
private Proxy proxy = Proxy.NO_PROXY;
public void downloadLibrary()
{
System.out.println("Start downloading libraries from server...");
try
{
URL resourceUrl = new URL("http://www.example.com/libraries.xml");
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(resourceUrl.openConnection(proxy).getInputStream());
NodeList nodeLst = doc.getElementsByTagName("Contents");
for (int i = 0; i < nodeLst.getLength(); i++)
{
Node node = nodeLst.item(i);
if (node.getNodeType() == 1)
{
Element element = (Element)node;
String key = element.getElementsByTagName("Key").item(0).getChildNodes().item(0).getNodeValue();
File f = new File(launcher.getWorkingDirectory(), key);
downloadFile("http://www.example.com/" + key, f, "libraries");
}
}
}
catch(Exception e)
{
System.out.println("Error was found when trying to download libraries file " + e);
}
}
public void downloadFile(final String url, final File path, final String fileName)
{
SwingWorker<Void, Void> worker = new SwingWorker<Void, Void>()
{
@Override
protected Void doInBackground() throws Exception
{
launcher.println("Downloading file " + fileName + "...");
try
{
URL fileURL = new URL(url);
ReadableByteChannel rbc = Channels.newChannel(fileURL.openStream());
FileOutputStream fos = new FileOutputStream(path);
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
}
catch(Exception e)
{
System.out.println("Cannot download file : " + fileName + " " + e);
}
return null;
}
@Override
public void done()
{
System.out.println(fileName + " was downloaded successfully");
}
};
worker.execute();
}
Here is part of my XML file (libraries.xml):
<Key>libraries/org/lwjgl/lwjgl/lwjgl/2.9.0/lwjgl-2.9.0.jar</Key>
My idea is that my application will read the XML file, then download each file from the server and save it to the computer. For example, my application downloads http://www.example.com/libraries/org/lwjgl/lwjgl/lwjgl/2.9.0/lwjgl-2.9.0.jar and saves it to C:/WorkingDir/libraries/org/lwjgl/lwjgl/lwjgl/2.9.0/lwjgl-2.9.0.jar.
There are tons of <Key></Key> entries in my XML file, and I have to download all of them.
Is any of my code wrong? Thanks for helping.
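One thing worth checking when a download to a nested path like libraries/org/... fails silently: FileOutputStream does not create missing parent directories. A minimal sketch of that guard (hypothetical names, not taken from the code above):
File f = new File(workingDir, key); // e.g. libraries/org/lwjgl/.../lwjgl-2.9.0.jar
File parent = f.getParentFile();
if (parent != null && !parent.exists()) {
    parent.mkdirs(); // build the directory tree before opening the stream
}
try (FileOutputStream fos = new FileOutputStream(f)) {
    // ... write the downloaded bytes here ...
}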

Try consuming the connection directly through a reader of some sort to a String first, then you can manipulate that however you need.
package com.somecompany.somepackage.utils;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
public class WebUtils {
/**
* Gets the HTML value of a website and
* returns as a string value.
*
* @param website website url to get.
* @param ssl True if website is SSL.
* @param useragent Specified User-Agent (empty string "" means use system default).
* @return String value of website.
*/
public String getHTML(String website, boolean ssl, String useragent) {
String html = "";
String temp;
String prefix;
if (ssl) {
prefix = "https://";
} else {
prefix = "http://";
}
try {
URL url = new URL(prefix + website);
URLConnection con = url.openConnection();
if (!(useragent.equalsIgnoreCase(""))) {
con.setRequestProperty("User-Agent", useragent);
}
BufferedReader in = new BufferedReader(
new InputStreamReader(con.getInputStream()));
while((temp = in.readLine()) != null) {
html += temp + "\n";
}
in.close();
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return html;
}
}
Also, it sort of looks like you are attempting to parse HTML using an XML pattern, which may give you difficulties. You could try out JSoup - it's a Java HTML parser that works pretty well and is easy to use: http://jsoup.org/
It may help with consuming documents from your website without needing to build your own downloader too.
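For instance, a minimal sketch using jsoup's XML parser mode (assuming jsoup is on the classpath; the URL and tag names are carried over from the question):
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.parser.Parser;

// Fetch libraries.xml and select every <Key> element directly.
Document doc = Jsoup.connect("http://www.example.com/libraries.xml")
        .parser(Parser.xmlParser())
        .get();
for (Element key : doc.select("Contents > Key")) {
    System.out.println(key.text()); // e.g. libraries/org/lwjgl/.../lwjgl-2.9.0.jar
}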
UPDATE --
Try reading into a BufferedReader; perhaps your program isn't receiving the full document, and a buffered reader may help.
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
So with your first method, something like:
public void downloadLibrary()
{
System.out.println("Start downloading libraries from server...");
try
{
URL resourceUrl = new URL("http://www.example.com/libraries.xml");
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
// change here
URLConnection con = resourceUrl.openConnection();
BufferedReader bfr = new BufferedReader(new InputStreamReader(con.getInputStream()));
String tempDoc = "";
String tempStr;
while ((tempStr = bfr.readLine()) != null) {
tempDoc += tempStr + System.getProperty("line.separator");
}
// DocumentBuilder.parse(String) treats the string as a URI, so wrap the
// content in an InputSource (org.xml.sax.InputSource, java.io.StringReader)
Document doc = db.parse(new InputSource(new StringReader(tempDoc)));
NodeList nodeLst = doc.getElementsByTagName("Contents");
for (int i = 0; i < nodeLst.getLength(); i++)
{
Node node = nodeLst.item(i);
if (node.getNodeType() == 1)
{
Element element = (Element)node;
String key = element.getElementsByTagName("Key").item(0).getChildNodes().item(0).getNodeValue();
File f = new File(launcher.getWorkingDirectory(), key);
downloadFile("http://www.example.com/" + key, f, "libraries");
}
}
}
catch(Exception e)
{
System.out.println("Error was found when trying to download libraries file " + e);
}
}

Related

How to download a file from Internet using Java, with URL query parameters [duplicate]

There is an online file (such as http://www.example.com/information.asp) I need to grab and save to a directory. I know there are several methods for grabbing and reading online files (URLs) line-by-line, but is there a way to just download and save the file using Java?
Give Java NIO a try:
URL website = new URL("http://www.website.com/information.asp");
ReadableByteChannel rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream("information.html");
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
Using transferFrom() is potentially much more efficient than a simple loop that reads from the source channel and writes to this channel. Many operating systems can transfer bytes directly from the source channel into the filesystem cache without actually copying them.
Check more about it here.
Note: The third parameter in transferFrom is the maximum number of bytes to transfer. Integer.MAX_VALUE will transfer at most 2^31 bytes, Long.MAX_VALUE will allow at most 2^63 bytes (larger than any file in existence).
Use Apache Commons IO. It is just one line of code:
FileUtils.copyURLToFile(URL, File)
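A complete usage sketch, assuming commons-io is on the classpath (the URL and file name are placeholders):
import java.io.File;
import java.net.URL;
import org.apache.commons.io.FileUtils;

public class CommonsIoDownload {
    public static void main(String[] args) throws Exception {
        // copyURLToFile downloads the resource and creates parent directories as needed
        FileUtils.copyURLToFile(
                new URL("http://www.example.com/information.asp"),
                new File("information.html"));
    }
}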
Simpler usage with java.nio:
URL website = new URL("http://www.website.com/information.asp");
try (InputStream in = website.openStream()) {
Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
}
public void saveUrl(final String filename, final String urlString)
throws MalformedURLException, IOException {
BufferedInputStream in = null;
FileOutputStream fout = null;
try {
in = new BufferedInputStream(new URL(urlString).openStream());
fout = new FileOutputStream(filename);
final byte data[] = new byte[1024];
int count;
while ((count = in.read(data, 0, 1024)) != -1) {
fout.write(data, 0, count);
}
} finally {
if (in != null) {
in.close();
}
if (fout != null) {
fout.close();
}
}
}
You'll need to handle exceptions, probably external to this method.
Here is a concise, readable, JDK-only solution with properly closed resources:
static long download(String url, String fileName) throws IOException {
try (InputStream in = URI.create(url).toURL().openStream()) {
return Files.copy(in, Paths.get(fileName));
}
}
Two lines of code and no dependencies.
Here's a complete file downloader example program with output, error checking, and command line argument checks:
package so.downloader;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Paths;
public class Application {
public static void main(String[] args) throws IOException {
if (2 != args.length) {
System.out.println("USAGE: java -jar so-downloader.jar <source-URL> <target-filename>");
System.exit(1);
}
String sourceUrl = args[0];
String targetFilename = args[1];
long bytesDownloaded = download(sourceUrl, targetFilename);
System.out.println(String.format("Downloaded %d bytes from %s to %s.", bytesDownloaded, sourceUrl, targetFilename));
}
static long download(String url, String fileName) throws IOException {
try (InputStream in = URI.create(url).toURL().openStream()) {
return Files.copy(in, Paths.get(fileName));
}
}
}
As noted in the so-downloader repository README:
To run file download program:
java -jar so-downloader.jar <source-URL> <target-filename>
For example:
java -jar so-downloader.jar https://github.com/JanStureNielsen/so-downloader/archive/main.zip so-downloader-source.zip
Downloading a file requires you to read it; either way, you will have to go through the file in some way. Instead of line by line, you can just read it by bytes from the stream:
BufferedInputStream in = new BufferedInputStream(new URL("http://www.website.com/information.asp").openStream());
FileOutputStream out = new FileOutputStream("information.html"); // destination stream (missing in the original snippet)
byte data[] = new byte[1024];
int count;
while ((count = in.read(data, 0, 1024)) != -1)
{
out.write(data, 0, count);
}
When using Java 7+, use the following method to download a file from the Internet and save it to some directory:
private static Path download(String sourceURL, String targetDirectory) throws IOException
{
URL url = new URL(sourceURL);
String fileName = sourceURL.substring(sourceURL.lastIndexOf('/') + 1, sourceURL.length());
Path targetPath = new File(targetDirectory + File.separator + fileName).toPath();
Files.copy(url.openStream(), targetPath, StandardCopyOption.REPLACE_EXISTING);
return targetPath;
}
Documentation is here.
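A quick usage sketch of the method above (source URL and target directory are placeholders):
Path saved = download("http://www.example.com/information.asp", "/tmp/downloads");
System.out.println("Saved to " + saved);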
This answer is almost exactly like the selected answer, but with two enhancements: it's a method and it closes out the FileOutputStream object:
public static void downloadFileFromURL(String urlString, File destination) {
try {
URL website = new URL(urlString);
ReadableByteChannel rbc;
rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream(destination);
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
fos.close();
rbc.close();
} catch (IOException e) {
e.printStackTrace();
}
}
import java.io.*;
import java.net.*;
public class filedown {
public static void download(String address, String localFileName) {
OutputStream out = null;
URLConnection conn = null;
InputStream in = null;
try {
URL url = new URL(address);
out = new BufferedOutputStream(new FileOutputStream(localFileName));
conn = url.openConnection();
in = conn.getInputStream();
byte[] buffer = new byte[1024];
int numRead;
long numWritten = 0;
while ((numRead = in.read(buffer)) != -1) {
out.write(buffer, 0, numRead);
numWritten += numRead;
}
System.out.println(localFileName + "\t" + numWritten);
}
catch (Exception exception) {
exception.printStackTrace();
}
finally {
try {
if (in != null) {
in.close();
}
if (out != null) {
out.close();
}
}
catch (IOException ioe) {
}
}
}
public static void download(String address) {
int lastSlashIndex = address.lastIndexOf('/');
if (lastSlashIndex >= 0 &&
lastSlashIndex < address.length() - 1) {
// new URL(...) throws a checked MalformedURLException, so it must be handled here
try {
download(address, (new URL(address)).getFile());
} catch (MalformedURLException e) {
System.err.println("Bad URL: " + address);
}
}
else {
System.err.println("Could not figure out local file name for " + address);
}
}
public static void main(String[] args) {
for (int i = 0; i < args.length; i++) {
download(args[i]);
}
}
}
Personally, I've found Apache's HttpClient to be more than capable of everything I've needed to do with regards to this. Here is a great tutorial on using HttpClient
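For a plain download, a minimal sketch with Apache HttpClient 4.x looks like this (class names are from that library; the URL and target path are placeholders):
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HttpClientDownload {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(new HttpGet("http://www.example.com/file.zip"));
             InputStream in = response.getEntity().getContent()) {
            // Stream the response body straight to disk
            Files.copy(in, Paths.get("file.zip"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}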
This is another Java 7 variant, based on Brian Risk's answer, using a try-with-resources statement:
public static void downloadFileFromURL(String urlString, File destination) throws Throwable {
URL website = new URL(urlString);
try(
ReadableByteChannel rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream(destination);
) {
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
}
}
There are many elegant and efficient answers here. But conciseness can make us lose some useful information. In particular, one often does not want to consider a connection error an Exception, and one might want to treat some kinds of network-related errors differently - for example, to decide whether we should retry the download.
Here's a method that does not throw exceptions for network errors (only for truly exceptional problems, such as a malformed URL or problems writing to the file):
/**
* Downloads from a (http/https) URL and saves to a file.
* Does not consider a connection error an Exception. Instead it returns:
*
* 0=ok
* 1=connection interrupted, timeout (but something was read)
* 2=not found (FileNotFoundException) (404)
* 3=server error (500...)
* 4=could not connect: connection timeout (no internet?) java.net.SocketTimeoutException
* 5=could not connect: (server down?) java.net.ConnectException
* 6=could not resolve host (bad host, or no internet - no dns)
*
* @param file File to write. Parent directory will be created if necessary
* @param url http/https url to connect
* @param secsConnectTimeout Seconds to wait for connection establishment
* @param secsReadTimeout Read timeout in seconds - transmission will abort if it freezes more than this
* @return See above
* @throws IOException Only if URL is malformed or if could not create the file
*/
public static int saveUrl(final Path file, final URL url,
int secsConnectTimeout, int secsReadTimeout) throws IOException {
Files.createDirectories(file.getParent()); // make sure parent dir exists , this can throw exception
URLConnection conn = url.openConnection(); // can throw exception if bad url
if( secsConnectTimeout > 0 ) conn.setConnectTimeout(secsConnectTimeout * 1000);
if( secsReadTimeout > 0 ) conn.setReadTimeout(secsReadTimeout * 1000);
int ret = 0;
boolean somethingRead = false;
try (InputStream is = conn.getInputStream()) {
try (BufferedInputStream in = new BufferedInputStream(is); OutputStream fout = Files
.newOutputStream(file)) {
final byte data[] = new byte[8192];
int count;
while((count = in.read(data)) > 0) {
somethingRead = true;
fout.write(data, 0, count);
}
}
} catch(java.io.IOException e) {
int httpcode = 999;
try {
httpcode = ((HttpURLConnection) conn).getResponseCode();
} catch(Exception ee) {}
if( somethingRead && e instanceof java.net.SocketTimeoutException ) ret = 1;
else if( e instanceof FileNotFoundException && httpcode >= 400 && httpcode < 500 ) ret = 2;
else if( httpcode >= 400 && httpcode < 600 ) ret = 3;
else if( e instanceof java.net.SocketTimeoutException ) ret = 4;
else if( e instanceof java.net.ConnectException ) ret = 5;
else if( e instanceof java.net.UnknownHostException ) ret = 6;
else throw e;
}
return ret;
}
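A usage sketch showing how the return code can drive a retry decision (the URL, path, and timeouts are placeholders):
int status = saveUrl(Paths.get("lib/lwjgl.jar"), new URL("http://www.example.com/lwjgl.jar"), 10, 30);
if (status == 1 || status == 4) {
    // timeout: the server may just be slow or briefly unreachable, so a retry is reasonable
} else if (status == 2) {
    // 404: retrying will not help
}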
It's possible to download the file with Apache's HttpComponents instead of Commons IO. This code allows you to download a file in Java from its URL and save it at a specific destination.
public static boolean saveFile(URL fileURL, String fileSavePath) {
boolean isSucceed = true;
CloseableHttpClient httpClient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet(fileURL.toString());
httpGet.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0");
httpGet.addHeader("Referer", "https://www.google.com");
try {
CloseableHttpResponse httpResponse = httpClient.execute(httpGet);
HttpEntity fileEntity = httpResponse.getEntity();
if (fileEntity != null) {
FileUtils.copyInputStreamToFile(fileEntity.getContent(), new File(fileSavePath));
}
} catch (IOException e) {
isSucceed = false;
}
httpGet.releaseConnection();
return isSucceed;
}
In contrast to the single line of code:
FileUtils.copyURLToFile(fileURL, new File(fileSavePath),
URLS_FETCH_TIMEOUT, URLS_FETCH_TIMEOUT);
This code gives you more control over the process and lets you specify not only timeouts but also User-Agent and Referer values, which are critical for many websites.
Below is sample code to download a movie from the Internet:
URL url = new URL("http://103.66.178.220/ftp/HDD2/Hindi%20Movies/2018/Hichki%202018.mkv");
BufferedInputStream bufferedInputStream = new BufferedInputStream(url.openStream());
FileOutputStream stream = new FileOutputStream("/home/sachin/Desktop/test.mkv");
int count = 0;
byte[] b1 = new byte[100];
while((count = bufferedInputStream.read(b1)) != -1) {
System.out.println("b1:" + b1 + ">>" + count + ">> KB downloaded:" + new File("/home/sachin/Desktop/test.mkv").length()/1024);
stream.write(b1, 0, count);
}
A solution based on java.net.http.HttpClient, with optional Authorization:
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.GET()
.header("Accept", "application/json")
// .header("Authorization", "Basic ci5raG9kemhhZXY6NDdiYdfjlmNUM=") if you need
.uri(URI.create("https://jira.google.ru/secure/attachment/234096/screenshot-1.png"))
.build();
HttpResponse<InputStream> response = client.send(request, HttpResponse.BodyHandlers.ofInputStream());
try (InputStream in = response.body()) {
Files.copy(in, Paths.get(target + "screenshot-1.png"), StandardCopyOption.REPLACE_EXISTING);
}
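Since Java 11 the same client can also stream straight to disk without the manual copy; a brief sketch (the target path is a placeholder):
HttpResponse<Path> saved = client.send(request, HttpResponse.BodyHandlers.ofFile(Paths.get("screenshot-1.png")));
System.out.println("Saved to " + saved.body());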
To summarize (and somewhat polish and update) the previous answers: the three following methods are practically equivalent. (I added explicit timeouts because I think they are a must. Nobody wants a download to freeze forever when the connection is lost.)
public static void saveUrl1(final Path file, final URL url,
int secsConnectTimeout, int secsReadTimeout)
throws MalformedURLException, IOException {
// Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
try (BufferedInputStream in = new BufferedInputStream(
streamFromUrl(url, secsConnectTimeout,secsReadTimeout));
OutputStream fout = Files.newOutputStream(file)) {
final byte data[] = new byte[8192];
int count;
while((count = in.read(data)) > 0)
fout.write(data, 0, count);
}
}
public static void saveUrl2(final Path file, final URL url,
int secsConnectTimeout, int secsReadTimeout)
throws MalformedURLException, IOException {
// Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
try (ReadableByteChannel rbc = Channels.newChannel(
streamFromUrl(url, secsConnectTimeout, secsReadTimeout)
);
FileChannel channel = FileChannel.open(file,
StandardOpenOption.CREATE,
StandardOpenOption.TRUNCATE_EXISTING,
StandardOpenOption.WRITE)
) {
channel.transferFrom(rbc, 0, Long.MAX_VALUE);
}
}
public static void saveUrl3(final Path file, final URL url,
int secsConnectTimeout, int secsReadTimeout)
throws MalformedURLException, IOException {
// Files.createDirectories(file.getParent()); // Optional, make sure parent directory exists
try (InputStream in = streamFromUrl(url, secsConnectTimeout,secsReadTimeout) ) {
Files.copy(in, file, StandardCopyOption.REPLACE_EXISTING);
}
}
public static InputStream streamFromUrl(URL url,int secsConnectTimeout,int secsReadTimeout) throws IOException {
URLConnection conn = url.openConnection();
if(secsConnectTimeout>0)
conn.setConnectTimeout(secsConnectTimeout*1000);
if(secsReadTimeout>0)
conn.setReadTimeout(secsReadTimeout*1000);
return conn.getInputStream();
}
I don't find significant differences, and all seem right to me. They are safe and efficient. (Differences in speed seem hardly relevant - I write 180 MB from the local server to an SSD disk in times that fluctuate around 1.2 to 1.5 seconds.) They don't require external libraries. All work with arbitrary sizes and (in my experience) HTTP redirections.
Additionally, all throw FileNotFoundException if the resource is not found (typically error 404) and java.net.UnknownHostException if the DNS resolution failed; other IOExceptions correspond to errors during transmission.
There is a method, U.fetch(url), in the underscore-java library.
File pom.xml:
<dependency>
<groupId>com.github.javadev</groupId>
<artifactId>underscore</artifactId>
<version>1.84</version>
</dependency>
Code example:
import com.github.underscore.U;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
public class Download {
public static void main(String[] args) throws IOException {
Files.write(Paths.get("data.bin"),
U.fetch("https://stackoverflow.com/questions"
+ "/921262/how-to-download-and-save-a-file-from-internet-using-java").blob());
}
}
You can do this in one line using netloader for Java:
new NetFile(new File("my/zips/1.zip"), "https://example.com/example.zip", -1).load(); // Returns true if succeed, otherwise false.
This reads a file from the Internet and writes it into a local file:
import java.net.URL;
import java.io.FileOutputStream;
import java.io.File;
public class Download {
public static void main(String[] args) throws Exception {
URL url = new URL("https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_272x92dp.png"); // Input URL
FileOutputStream out = new FileOutputStream(new File("out.png")); // Output file
out.write(url.openStream().readAllBytes());
out.close();
}
}
There is an issue with simple usage of:
org.apache.commons.io.FileUtils.copyURLToFile(URL, File)
if you need to download and save very large files, or in general if you need automatic retries in case the connection is dropped.
I suggest Apache HttpClient in such cases, along with org.apache.commons.io.FileUtils. For example (Commons HttpClient 3.x style API):
HttpClient client = new HttpClient(); // the 3.x client instance was missing in the original snippet
GetMethod method = new GetMethod(resource_url);
try {
int statusCode = client.executeMethod(method);
if (statusCode != HttpStatus.SC_OK) {
logger.error("Get method failed: " + method.getStatusLine());
}
org.apache.commons.io.FileUtils.copyInputStreamToFile(
method.getResponseBodyAsStream(), new File(resource_file));
} catch (HttpException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
method.releaseConnection();
}
First method, using an NIO channel:
ReadableByteChannel rbc = Channels.newChannel(new URL("https://asd/abc.txt").openStream());
FileOutputStream fileOS = new FileOutputStream("C:/Users/local/abc.txt");
FileChannel writech = fileOS.getChannel();
writech.transferFrom(rbc, 0, Long.MAX_VALUE);
Second method, using FileUtils:
FileUtils.copyURLToFile(new URL("https://asd/abc.txt"), new File("C:/Users/system/abc.txt"));
Third method, using an InputStream:
InputStream xy = new URL("https://asd/abc.txt").openStream();
This is how we can download a file using basic Java code and third-party libraries. These are just quick references. Please search with the above keywords for detailed information and other options.
If you are behind a proxy, you can set the proxies in the Java program as below:
Properties systemSettings = System.getProperties();
systemSettings.put("proxySet", "true");
systemSettings.put("https.proxyHost", "HTTPS proxy of your org");
systemSettings.put("https.proxyPort", "8080");
If you are not behind a proxy, don't include those lines in your code. Below is the full working code to download a file when you are behind a proxy:
public static void main(String[] args) throws IOException {
String url = "https://raw.githubusercontent.com/bpjoshi/fxservice/master/src/test/java/com/bpjoshi/fxservice/api/TradeControllerTest.java";
OutputStream outStream = null;
URLConnection connection = null;
InputStream is = null;
File targetFile = null;
URL server = null;
// Setting up proxies
Properties systemSettings = System.getProperties();
systemSettings.put("proxySet", "true");
systemSettings.put("https.proxyHost", "HTTPS proxy of my organisation");
systemSettings.put("https.proxyPort", "8080");
// The same way we could also set proxy for HTTP
System.setProperty("java.net.useSystemProxies", "true");
// Code to fetch file
try {
server = new URL(url);
connection = server.openConnection();
is = connection.getInputStream();
targetFile = new File("src/main/resources/targetFile.java");
outStream = new FileOutputStream(targetFile);
// is.available() only reports bytes already buffered, so copy in a loop instead
byte[] buffer = new byte[4096];
int read;
while ((read = is.read(buffer)) != -1) {
outStream.write(buffer, 0, read);
}
} catch (MalformedURLException e) {
System.out.println("THE URL IS NOT CORRECT ");
e.printStackTrace();
} catch (IOException e) {
System.out.println("I/O exception");
e.printStackTrace();
}
finally{
if(outStream != null)
outStream.close();
}
}
public class DownloadManager {
static String urls = "[WEBSITE NAME]";
public static void main(String[] args) throws IOException{
URL url = verify(urls);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
InputStream in = null;
String filename = url.getFile();
filename = filename.substring(filename.lastIndexOf('/') + 1);
FileOutputStream out = new FileOutputStream("C:\\Java2_programiranje/Network/DownloadTest1/Project/Output" + File.separator + filename);
in = connection.getInputStream();
int read = -1;
byte[] buffer = new byte[4096];
while((read = in.read(buffer)) != -1){
out.write(buffer, 0, read);
System.out.println("[SYSTEM/INFO]: Downloading file...");
}
in.close();
out.close();
System.out.println("[SYSTEM/INFO]: File Downloaded!");
}
private static URL verify(String url){
if(!url.toLowerCase().startsWith("http://")) {
return null;
}
URL verifyUrl = null;
try{
verifyUrl = new URL(url);
}catch(Exception e){
e.printStackTrace();
}
return verifyUrl;
}
}

Java Image Downloader Can't Skip if Error

I am referencing an array to generate URLs to download images from. However, if a URL is dead, the program errors out and stops. I need it to skip that URL if an error occurs, but I cannot figure out how. Any ideas?
package getimages2;
public class Extractimages {
public static String url_to_get = "http://www.url.com/upc/11206007076";
public static long urlupc;
public static String URLarray[] = new String[3000];
public static void main(String args[]) throws Exception {
for(int apples = 0; apples<URLarray.length; apples++){
String UPCarray[] = {"051000012616","051000012913","051000012937"};
url_to_get = "http://www.url/upc/" + UPCarray[apples];
String webUrl = url_to_get;
String url2 = url_to_get;
URL url = new URL(webUrl);
URLConnection connection = url.openConnection();
InputStream is = connection.getInputStream();
InputStreamReader isr = new InputStreamReader(is);
BufferedReader br = new BufferedReader(isr);
HTMLEditorKit htmlKit = new HTMLEditorKit();
HTMLDocument htmlDoc = (HTMLDocument) htmlKit.createDefaultDocument();
HTMLEditorKit.Parser parser = new ParserDelegator();
HTMLEditorKit.ParserCallback callback = htmlDoc.getReader(0);
parser.parse(br, callback, true);
for (HTMLDocument.Iterator iterator = htmlDoc.getIterator(HTML.Tag.IMG); iterator.isValid(); iterator.next()) {
AttributeSet attributes = iterator.getAttributes();
String imgSrc = (String) attributes.getAttribute(HTML.Attribute.SRC);
if (imgSrc != null && (imgSrc.endsWith(".jpg") || (imgSrc.endsWith(".png")) || (imgSrc.endsWith(".jpeg")) || (imgSrc.endsWith(".bmp")) || (imgSrc.endsWith(".ico")))) {
try {
downloadImage(webUrl, imgSrc);
} catch (IOException ex) {
System.out.println(ex.getMessage());
}
}
}
}
}
public static String right(String value, int length) {
return value.substring(value.length() - length);}
private static void downloadImage(String url, String imgSrc) throws IOException {
BufferedImage image = null;
try {
if (!(imgSrc.startsWith("http"))) {
url = url + imgSrc;
} else {
url = imgSrc;
}
String webUrl = url_to_get;
String imagename = right(webUrl , 12);
imgSrc = imgSrc.substring(imgSrc.lastIndexOf("/") + 1);
String imageFormat = null;
imageFormat = imgSrc.substring(imgSrc.lastIndexOf(".") + 1);
String imgPath = null;
imgPath = "C:/Users/Noah/Desktop/photos/" + imagename + ".jpg";
URL imageUrl = new URL(url);
image = ImageIO.read(imageUrl);
if (image != null) {
File file = new File(imgPath);
ImageIO.write(image, imageFormat, file);
}
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
The program runs through the array fine, but once it generates a dead URL, it errors and exits. I need it to skip that URL and move on.
Thanks for any help.
Well, you'll need to have a way to check if the URL you've generated is valid. There's some answers on that here:
How to check for a valid URL in Java?
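Alternatively, since a dead URL shows up as an IOException when the connection is opened or read, you can catch it inside the loop and continue; a minimal sketch of that pattern (variable names carried over from the question):
for (int apples = 0; apples < UPCarray.length; apples++) {
    String webUrl = "http://www.url/upc/" + UPCarray[apples];
    try {
        URL url = new URL(webUrl);
        URLConnection connection = url.openConnection();
        InputStream is = connection.getInputStream(); // throws on a dead URL
        // ... parse the page and download images as before ...
    } catch (IOException e) {
        System.out.println("Skipping dead URL: " + webUrl);
        // the loop simply moves on to the next UPC
    }
}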

Java specific image downloader with multiple URL's

I can't figure out how to reference a specific line of text in a .txt file. I need a specific image from a list of URLs, and I am trying to generate the URLs by concatenating a URL prefix with a search number taken from a list of numbers in a .txt file. I can't figure out how to reference the .txt file and get a string from a line number.
package getimages;
public class ExtractAllImages {
public static int url_to_get = 1;
public static String urlupc;
public static String urlpre = "http://urlineedimagefrom/searchfolder/";
public static String url2 = "" + urlpre + urlupc + "";
public static void main(String args[]) throws Exception {
while(url_to_get > 2622){
String line = Files.readAllLines(Paths.get("file_on_my_desktop.txt")).get(url_to_get);
urlupc = line;
url2 = "" + urlpre + urlupc + "";
String webUrl = url2;
URL url = new URL(webUrl);
URLConnection connection = url.openConnection();
InputStream is = connection.getInputStream();
InputStreamReader isr = new InputStreamReader(is);
BufferedReader br = new BufferedReader(isr);
HTMLEditorKit htmlKit = new HTMLEditorKit();
HTMLDocument htmlDoc = (HTMLDocument) htmlKit.createDefaultDocument();
HTMLEditorKit.Parser parser = new ParserDelegator();
HTMLEditorKit.ParserCallback callback = htmlDoc.getReader(0);
parser.parse(br, callback, true);
for (HTMLDocument.Iterator iterator = htmlDoc.getIterator(HTML.Tag.IMG); iterator.isValid(); iterator.next()) {
AttributeSet attributes = iterator.getAttributes();
String imgSrc = (String) attributes.getAttribute(HTML.Attribute.SRC);
if (imgSrc != null && (imgSrc.endsWith(".jpg") || (imgSrc.endsWith(".png")) || (imgSrc.endsWith(".jpeg")) || (imgSrc.endsWith(".bmp")) || (imgSrc.endsWith(".ico")))) {
try {
downloadImage(webUrl, imgSrc);
} catch (IOException ex) {
System.out.println(ex.getMessage());
}
}
}
}
}
public static String right(String value, int length) {
return value.substring(value.length() - length);}
private static void downloadImage(String url, String imgSrc) throws IOException {
BufferedImage image = null;
try {
if (!(imgSrc.startsWith("http"))) {
url = url + imgSrc;
} else {
url = imgSrc;
}
String webUrl = url2;
String imagename = right(webUrl , 12);
imgSrc = imgSrc.substring(imgSrc.lastIndexOf("/") + 1);
String imageFormat = null;
imageFormat = imgSrc.substring(imgSrc.lastIndexOf(".") + 1);
String imgPath = null;
imgPath = "C:/Users/Noah/Desktop/photos/" + urlupc + ".jpg";
URL imageUrl = new URL(url);
image = ImageIO.read(imageUrl);
if (image != null) {
File file = new File(imgPath);
ImageIO.write(image, imageFormat, file);
}
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
My error is in setting the String line. My txt file has 2622 lines, and I can't reference the file on my desktop; I'm not sure how to set the file path to my desktop. Sorry, I'm not good at Java.
Thanks for any help.
From what I understood you don't know how to access a file that is on your Desktop. Try doing this:
String path = "C:\\Users\\" + System.getProperty("user.name") + "\\Desktop\\" + fileName + ".txt";
Let me explain. We are using
System.getProperty("user.name")
to get the name of the user; if you don't want to use it, you can replace it with your user name. Then we are using
"\\Desktop\\"
to access the Desktop, and finally we are adding
fileName + ".txt"
to access the file we want that has the extension '.txt'.
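Putting it together with the line lookup from the question, a brief sketch (fileName and lineNumber are placeholders; note that Files.readAllLines is 0-indexed, so line 1 of the file is .get(0)):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

String path = "C:\\Users\\" + System.getProperty("user.name") + "\\Desktop\\" + fileName + ".txt";
List<String> lines = Files.readAllLines(Paths.get(path));
String urlupc = lines.get(lineNumber - 1); // lineNumber counted from 1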

How to check if a download process is complete

My application will parse an XML file from my web server and then download the files. It will get the contents of each "URL" entry in the XML file and download it using Commons IO's FileUtils. My problem is: how do I know whether my download process is done or not? There are a lot of "URL" entries in my XML file.
Here is some part of my code:
private Proxy proxy = Proxy.NO_PROXY;
public void downloadLibrary()
{
System.out.println("Start downloading libraries from server...");
try
{
URL resourceUrl = new URL("http://www.example.com/libraries.xml");
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(resourceUrl.openConnection(proxy).getInputStream());
NodeList nodeLst = doc.getElementsByTagName("Contents");
for (int i = 0; i < nodeLst.getLength(); i++)
{
Node node = nodeLst.item(i);
if (node.getNodeType() == 1)
{
Element element = (Element)node;
String key = element.getElementsByTagName("Key").item(0).getChildNodes().item(0).getNodeValue();
File f = new File(launcher.getWorkingDirectory(), key);
downloadFile("http://www.example.com/" + key, f, "libraries");
}
}
}
catch(Exception e)
{
System.out.println("Error was found when trying to download libraries file " + e);
}
}
public void downloadFile(final String url, final File path, final String fileName)
{
SwingWorker<Void, Void> worker = new SwingWorker<Void, Void>()
{
@Override
protected Void doInBackground() throws Exception
{
launcher.println("Downloading file " + fileName + "...");
try
{
URL fileURL = new URL(url);
ReadableByteChannel rbc = Channels.newChannel(fileURL.openStream());
FileOutputStream fos = new FileOutputStream(path);
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
}
catch(Exception e)
{
System.out.println("Cannot download file : " + fileName + " " + e);
}
return null;
}
@Override
public void done()
{
System.out.println(fileName + " was downloaded successfully");
}
};
worker.execute();
}
If you run each download in its own thread, which I recommend, you have several possibilities:
Use Future and FutureTask to wait until all the threads are done.
Use thread.join() to wait until the thread stops.
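A minimal sketch of the Future approach, assuming the blocking download logic is pulled out of the SwingWorker into a plain helper (downloadOneFile below is a hypothetical name for that helper, assumed to catch its own IOException):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

static void downloadAll(List<String> keys) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> futures = new ArrayList<>();
    for (String key : keys) {
        futures.add(pool.submit(() -> downloadOneFile(key))); // hypothetical blocking helper
    }
    for (Future<?> f : futures) {
        f.get(); // blocks until that download finishes (or rethrows its failure)
    }
    pool.shutdown();
    System.out.println("All downloads complete");
}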
