I am trying to create a Java console app which will download a file from a URL. The file is created at runtime and I don't know the file name. When I copy and paste the URL into the browser, a save-file dialog pops up. Now I want to write Java code which logs on to the server (I already have the user validation done), goes to that URL, and downloads the file.
Sergii's answer is a good start; however, if the website you want to use requires more than a simple download, consider using Apache's HttpClient.
It supports cookies, authentication, URIs as well as URLs, and file upload.
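For instance, here is a rough sketch of an authenticated download with HttpClient 4.x; the credentials, the URL, and the output file name are placeholders, and your server may need form-based login instead of basic auth:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.apache.http.HttpEntity;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HttpClientDownload {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials and URL -- replace with your own.
        CredentialsProvider credentials = new BasicCredentialsProvider();
        credentials.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("user", "password"));

        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultCredentialsProvider(credentials)
                .build();
             CloseableHttpResponse response =
                     client.execute(new HttpGet("http://example.com/report"))) {

            HttpEntity entity = response.getEntity();
            try (InputStream in = entity.getContent()) {
                // Save the response body to disk.
                Files.copy(in, Paths.get("downloaded.bin"),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}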
For a simpler start, here is the basic example from the Java docs; just put your URL instead of http://www.oracle.com/, and change System.out.println(inputLine); to write the file to disk.
import java.net.*;
import java.io.*;

public class URLReader {
    public static void main(String[] args) throws Exception {
        URL oracle = new URL("http://www.oracle.com/");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(oracle.openStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null)
            System.out.println(inputLine);
        in.close();
    }
}
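As for saving the file when you don't know its name up front: the browser's save dialog usually comes from the Content-Disposition response header, so one option is to read that header and fall back to a default name. A rough sketch, assuming plain HttpURLConnection and a placeholder URL:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class SaveWithServerFileName {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/generate-report"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // e.g. Content-Disposition: attachment; filename="report.pdf"
        String disposition = conn.getHeaderField("Content-Disposition");
        String fileName = "download.bin"; // fallback if the header is missing
        if (disposition != null && disposition.contains("filename=")) {
            fileName = disposition
                    .substring(disposition.indexOf("filename=") + 9)
                    .replace("\"", "")
                    .trim();
        }

        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, Paths.get(fileName), StandardCopyOption.REPLACE_EXISTING);
        }
        conn.disconnect();
    }
}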
Related
I'm new to coding and have decided to start my learning with Java. I've got NetBeans and have started to create a very basic web application. I'd like to display values from a .txt file on the web page, and I've got this code to do so.
<%
    BufferedReader in = new BufferedReader(new FileReader("Cats.txt"));
    String line;
    while ((line = in.readLine()) != null) {
        out.println(line);
    }
    in.close();
%>
My text file is in the same folder as my src folder (which is where I've seen you need to put the file).
However, whenever I navigate to the web page I get a FileNotFound error. I've tried putting the file's full path in the FileReader, but that gives an error because of the backslashes.
If anyone could help, it would be greatly appreciated.
Currently it's looking for the file in the src directory of your application, so you should be able to just move the file there and it will read it. If you want to point to a specific path, you need to escape each '\' in the Java string literal by writing two backslashes instead of one, e.g.:
<%
    BufferedReader in = new BufferedReader(
            new FileReader("C:\\MYPATH\\MYPATH2\\Cats.txt"));
    String line;
    while ((line = in.readLine()) != null) {
        out.println(line);
    }
    in.close();
%>
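Alternatively, Java also accepts forward slashes in Windows paths, which avoids the escaping problem entirely (the path below is just a placeholder):

<%
    BufferedReader in = new BufferedReader(
            new FileReader("C:/MYPATH/MYPATH2/Cats.txt"));
%>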
I want to read and write a text document which I upload to OneDrive or Google Drive, in Java.
BufferedReader bufferedReader;
String line;
URL url = new URL("https://docs.google.com/document/d/1JLRmW7eaXHyyoPdNCyNN7AHWzKWaAPWgL84sxHUB7dQ/edit");
bufferedReader = new BufferedReader(new InputStreamReader(url.openStream()));
while ((line = bufferedReader.readLine()) != null) {
    System.out.println(line);
}
With this code I only read the page's HTML, and that's not helpful.
The text contains the IPs of some servers that I have to connect clients to.
Note that I want to access this file from a Java program, not an Android app.
From what I've researched, the available Google Drive APIs are not quite suited for a Java implementation,
since they are built for Android solutions.
the available Google Drive APIs are not quite suited for a Java implementation
Google APIs support Java. Android is Java, so the API (classes, methods you need to call, etc.) is the same on either platform.
The Drive API is part of the Google API Client Library. Here is a great overview:
Google API Client Library
There is also a Developer's Guide with instructions on getting set up, and some Examples.
The drive-cmdline-sample project would be a good place to start. There is a downloadFile method that looks like it will get the source file:
/** Downloads a file using either resumable or direct media download. */
private static void downloadFile(boolean useDirectDownload, File uploadedFile)
        throws IOException {
    // create parent directory (if necessary)
    java.io.File parentDir = new java.io.File(DIR_FOR_DOWNLOADS);
    if (!parentDir.exists() && !parentDir.mkdirs()) {
        throw new IOException("Unable to create parent directory");
    }
    OutputStream out = new FileOutputStream(new java.io.File(parentDir, uploadedFile.getTitle()));

    MediaHttpDownloader downloader =
            new MediaHttpDownloader(httpTransport, drive.getRequestFactory().getInitializer());
    downloader.setDirectDownloadEnabled(useDirectDownload);
    downloader.setProgressListener(new FileDownloadProgressListener());
    downloader.download(new GenericUrl(uploadedFile.getDownloadUrl()), out);
}
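If the document in the question is a Google Doc shared via a link, there is also a lighter-weight workaround that avoids the API entirely: Google Docs documents can usually be fetched as plain text through an export URL instead of the HTML editor page. A sketch, reusing the document ID from the question and assuming the document is readable via the link:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class ReadGoogleDocAsText {
    public static void main(String[] args) throws Exception {
        // Same document ID as in the question; export?format=txt returns plain text.
        String docId = "1JLRmW7eaXHyyoPdNCyNN7AHWzKWaAPWgL84sxHUB7dQ";
        URL url = new URL("https://docs.google.com/document/d/" + docId + "/export?format=txt");

        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // e.g. each server IP on its own line
            }
        }
    }
}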
I have a url of a text file and I want to read it:
URL url = new URL("example.com/textfile.txt");
BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
String inpuline = null;
while ((inpuline = in.readLine()) != null) {
System.out.println(inpuline);
}
in.close();
The problem is that when I change the content of textfile.txt, my program does not pick up the changes the next time it runs.
After you change the txt file, first verify that your server has picked up the change and is serving the latest version of the file. To verify this, use a browser. If you don't get the latest version of your file, something is wrong with the server. If you have to press Ctrl+F5 to see it, then a proxy or your browser has cached the old file.
If that checks out, the following workarounds may help:
try {
    URL url = new URL("http://example.com/textfile.txt");
    Scanner s = new Scanner(url.openStream());
    // read from your scanner
} catch (IOException ex) {
    ex.printStackTrace(); // for now, simply output it.
}
If you still get the cached version of your file, try using HttpURLConnection to download the file and write it to a temp file, then read from that temp file and delete it afterwards. Downloading the whole file this way may help you get past a stale cache and fetch the newest version. To avoid a cached response, try this:
// Create a URLConnection object
URLConnection connection = myURL.openConnection();
// Disable caching
connection.setUseCaches(false);
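A fuller sketch of that temp-file idea, assuming a placeholder URL; the Cache-Control request header is an extra hint to intermediate caches:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FreshDownload {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/textfile.txt"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setUseCaches(false);                             // bypass local caching
        conn.setRequestProperty("Cache-Control", "no-cache"); // hint to proxies

        Path temp = Files.createTempFile("textfile", ".txt");
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, temp, StandardCopyOption.REPLACE_EXISTING);
        }

        // Read the fresh copy, then clean up.
        for (String line : Files.readAllLines(temp)) {
            System.out.println(line);
        }
        Files.delete(temp);
        conn.disconnect();
    }
}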
Good Luck.
I have a scheduled task (using cron) inside my Spring MVC application. Inside the scheduled task I have to get a CSV from an external server at the following link:
http://www.aemet.es/es/eltiempo/observacion/ultimosdatos_6172O_datos-horarios.csv?k=and&l=6172O&datos=det&w=0&f=temperatura&x=h24
And once I get it I have to parse it.
The problem comes when getting the file: when I click the link above I can download it to my computer, but I don't know how to do that using Spring. Can you give me a hint?
UPDATE: I don't have any code yet, but I guess it must be something similar to the following:
URL stockURL = new URL("http://example.com/stock.csv");
BufferedReader in = new BufferedReader(new InputStreamReader(stockUrl.openStream()));
CSVReader reader = new CSVReader(in);
But the problem is that my URL doesn't point to exactly a .csv file; when I put the URL in a browser it looks like a redirect.
Thank you very much indeed.
Thank you all for your comments. Even though the URL doesn't have a .csv extension, I tried the following code (plain Java, not Spring) and it works!
URL stockURL = new URL("http://www.aemet.es/es/eltiempo/observacion/ultimosdatos_6172O_datos-horarios.csv?k=and&l=6172O&datos=det&w=0&f=temperatura&x=h24");
BufferedReader in = new BufferedReader(new InputStreamReader(stockURL.openStream()));
//CSVReader reader = new CSVReader(in);
String line;
while((line = in.readLine()) != null){
System.out.println(line);
}
So I guess that getting the file with Spring is going to be very similar (a sketch of that is below). Thank you very much, everybody!
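For the Spring side, a minimal sketch using RestTemplate to pull the CSV into a String inside the scheduled method; the bean wiring, the cron expression, and the opencsv CSVReader usage are assumptions based on the snippets above:

import java.io.StringReader;
import java.util.List;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

import au.com.bytecode.opencsv.CSVReader;

@Component
public class WeatherCsvTask {

    private static final String CSV_URL =
            "http://www.aemet.es/es/eltiempo/observacion/ultimosdatos_6172O_datos-horarios.csv"
            + "?k=and&l=6172O&datos=det&w=0&f=temperatura&x=h24";

    private final RestTemplate restTemplate = new RestTemplate();

    @Scheduled(cron = "0 0 * * * *") // example: once an hour
    public void fetchAndParse() throws Exception {
        // Returns the response body as text; the default client follows GET redirects.
        String csv = restTemplate.getForObject(CSV_URL, String.class);

        CSVReader reader = new CSVReader(new StringReader(csv));
        List<String[]> rows = reader.readAll();
        reader.close();
        System.out.println("Fetched " + rows.size() + " rows");
    }
}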
I would like my web crawler to download all of the browsed URLs locally. At the minute it downloads every site it comes to, but then overwrites the local file with each website visited. The crawler starts at www.bbc.co.uk, downloads that file, and then when it hits another URL it overwrites that file with the next one. How can I make it download them into separate files so I have a collection at the end? I have the code below, but I don't know where to go from here. Any advice would be great. The URL inside the brackets (URL) is a string which is used for all the browsed web pages.
URL url = new URL(URL);
BufferedWriter writer;
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(url.openStream()))) {
    writer = new BufferedWriter(
            new FileWriter("c:/temp/data.html", true));
    String line;
    while ((line = reader.readLine()) != null) {
        //System.out.println(line);
        writer.write(line);
        writer.newLine();
    }
}
writer.close();
You need to give your files unique names.
You can save them in different folders (one root directory for each web site),
or you can give each file a unique name (using a counter, for example).
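A minimal sketch of the counter approach, assuming a helper method that your crawler calls once per visited page (the class name, method name, and output directory are placeholders):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.InputStreamReader;
import java.net.URL;

public class PageSaver {

    private int pageCounter = 0; // gives every downloaded page its own file

    public void savePage(String pageUrl) throws Exception {
        URL url = new URL(pageUrl);
        // e.g. c:/temp/page-0.html, c:/temp/page-1.html, ...
        String fileName = "c:/temp/page-" + pageCounter++ + ".html";

        try (BufferedReader reader = new BufferedReader(
                     new InputStreamReader(url.openStream()));
             BufferedWriter writer = new BufferedWriter(
                     new FileWriter(fileName))) {
            String line;
            while ((line = reader.readLine()) != null) {
                writer.write(line);
                writer.newLine();
            }
        }
    }
}

If you want the file names to be recognisable, you could instead derive them from each page's host or path, as long as you strip out characters that are not allowed in file names.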