How do you Programmatically Download a Webpage in Java

I would like to be able to fetch a web page's HTML and save it to a String, so I can do some processing on it. Also, how would I handle various types of compression?
How would I go about doing that in Java?

I'd use a decent HTML parser like Jsoup. It's then as easy as:
String html = Jsoup.connect("http://stackoverflow.com").get().html();
It handles GZIP and chunked responses and character encoding fully transparently. It offers more advantages as well, like HTML traversal and manipulation by CSS selectors, much as jQuery does. You only have to grab it as a Document, not as a String.
Document document = Jsoup.connect("http://google.com").get();
You really don't want to run basic String methods or even regex on HTML to process it.
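For instance, a minimal sketch of the jQuery-like selection this enables (the selector here is illustrative, not specific to Stack Overflow's markup; Document and Element live in org.jsoup.nodes):
Document document = Jsoup.connect("http://stackoverflow.com").get();
// select every hyperlink and print its absolute URL and link text
for (Element link : document.select("a[href]")) {
    System.out.println(link.attr("abs:href") + " : " + link.text());
}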
See also:
What are the pros and cons of leading HTML parsers in Java?

Here's some tested code using Java's URL class. I'd recommend doing a better job than I do here of handling the exceptions or passing them up the call stack, though.
public static void main(String[] args) {
    URL url;
    InputStream is = null;
    BufferedReader br;
    String line;

    try {
        url = new URL("http://stackoverflow.com/");
        is = url.openStream(); // throws an IOException
        br = new BufferedReader(new InputStreamReader(is));

        while ((line = br.readLine()) != null) {
            System.out.println(line);
        }
    } catch (MalformedURLException mue) {
        mue.printStackTrace();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    } finally {
        try {
            if (is != null) is.close();
        } catch (IOException ioe) {
            // nothing to see here
        }
    }
}

Bill's answer is very good, but you may want to do some things with the request, like compression or user agents. The following code shows how to add support for various types of compression in your requests.
URL url = new URL(urlStr);
HttpURLConnection conn = (HttpURLConnection) url.openConnection(); // cast shouldn't fail
HttpURLConnection.setFollowRedirects(true);
// allow both GZip and Deflate (ZLib) encodings
conn.setRequestProperty("Accept-Encoding", "gzip, deflate");

String encoding = conn.getContentEncoding();
InputStream inStr = null;

// create the appropriate stream wrapper based on the encoding type
if (encoding != null && encoding.equalsIgnoreCase("gzip")) {
    inStr = new GZIPInputStream(conn.getInputStream());
} else if (encoding != null && encoding.equalsIgnoreCase("deflate")) {
    inStr = new InflaterInputStream(conn.getInputStream(), new Inflater(true));
} else {
    inStr = conn.getInputStream();
}
To also set the user agent, add the following line:
conn.setRequestProperty("User-Agent", "my agent name");
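To then read the decoded stream into a String, here is a minimal sketch, assuming the page is UTF-8 (in practice you'd parse the charset out of the Content-Type header):
BufferedReader reader = new BufferedReader(new InputStreamReader(inStr, "UTF-8"));
StringBuilder sb = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
    sb.append(line).append('\n');
}
reader.close();
String html = sb.toString();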

Well, you could go with the built-in libraries such as URL and URLConnection, but they don't give you very much control.
Personally I'd go with the Apache HttpClient library.
Edit: HttpClient has been set to end of life by Apache. The replacement is Apache HttpComponents.
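A minimal sketch with the HttpComponents client (HttpClient 4.x class names; check the API of the version you actually depend on):
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

CloseableHttpClient client = HttpClients.createDefault();
try {
    CloseableHttpResponse response = client.execute(new HttpGet("http://stackoverflow.com"));
    try {
        // EntityUtils.toString uses the charset from the Content-Type header
        String html = EntityUtils.toString(response.getEntity());
        System.out.println(html);
    } finally {
        response.close();
    }
} finally {
    client.close();
}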

None of the approaches mentioned above downloads the web page text as it looks in the browser. These days a lot of data is loaded into browsers through scripts in HTML pages; none of the techniques above supports scripts, they just download the HTML text. HtmlUnit supports JavaScript, so if you are looking to download the web page text as it looks in the browser, then you should use HtmlUnit.
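For example, here is a minimal HtmlUnit sketch (method names vary a little between HtmlUnit versions, so treat this as a guide rather than the definitive API):
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

WebClient webClient = new WebClient();
try {
    // getPage runs the page's JavaScript before returning
    HtmlPage page = webClient.getPage("http://stackoverflow.com");
    System.out.println(page.asXml()); // the DOM as rendered, not the raw HTML
} finally {
    webClient.close();
}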

You may need to extract the source of a secure web page (HTTPS protocol). In the following example, the HTML file is saved into c:\temp\filename.html. Enjoy!
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

/**
 * <b>Get the HTML source from the secure URL</b>
 */
public class HttpsClientUtil {
    public static void main(String[] args) throws Exception {
        String httpsURL = "https://stackoverflow.com";
        String FILENAME = "c:\\temp\\filename.html";
        // use an explicit charset for the writer so it matches the reader's
        BufferedWriter bw = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream(FILENAME), "UTF-8"));

        URL myurl = new URL(httpsURL);
        HttpsURLConnection con = (HttpsURLConnection) myurl.openConnection();
        con.setRequestProperty("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0");

        InputStream ins = con.getInputStream();
        // stackoverflow.com serves UTF-8; adjust to the page's actual charset
        InputStreamReader isr = new InputStreamReader(ins, "UTF-8");
        BufferedReader in = new BufferedReader(isr);

        String inputLine;
        // write each line into the file
        while ((inputLine = in.readLine()) != null) {
            System.out.println(inputLine);
            bw.write(inputLine);
            bw.newLine(); // readLine() strips the line terminator, so put it back
        }
        in.close();
        bw.close();
    }
}

To do so using NIO.2's powerful Files.copy(InputStream in, Path target):
URL url = new URL("http://download.me/");
Files.copy(url.openStream(), Paths.get("downloaded.html"));
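Note that Files.copy throws FileAlreadyExistsException if the target file already exists; pass StandardCopyOption.REPLACE_EXISTING (from java.nio.file) to overwrite instead:
Files.copy(url.openStream(), Paths.get("downloaded.html"), StandardCopyOption.REPLACE_EXISTING);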

On a Unix/Linux box you could just run 'wget' but this is not really an option if you're writing a cross-platform client. Of course this assumes that you don't really want to do much with the data you download between the point of downloading it and it hitting the disk.
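If you do go that route, a sketch of shelling out to wget from Java (this assumes wget is on the PATH; the output file name is illustrative):
// runs: wget -O page.html http://stackoverflow.com
Process p = new ProcessBuilder("wget", "-O", "page.html", "http://stackoverflow.com")
        .inheritIO()
        .start();
int exitCode = p.waitFor(); // 0 means wget succeeded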

This class may help: it downloads the page source and filters out some information (an Android example).
public class MainActivity extends AppCompatActivity {

    EditText url;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        url = (EditText) findViewById(R.id.editText);

        DownloadCode obj = new DownloadCode();
        try {
            String tag1 = "<div class=\"description\">";
            String l = obj.execute("http://www.nu.edu.pk/Campus/Chiniot-Faisalabad/Faculty").get();
            // keep only what follows the opening tag, then cut at the closing tag
            String[] t1 = l.split(tag1);
            String[] t2 = t1[1].split("</div>");
            url.setText(t2[0]);
        } catch (Exception e) {
            Toast.makeText(this, e.toString(), Toast.LENGTH_SHORT).show();
        }
    }

    // input, background work, output
    class DownloadCode extends AsyncTask<String, Void, String> {

        @Override
        protected String doInBackground(String... WebAddress) { // web addresses, separated by ','
            StringBuilder htmlcontent = new StringBuilder();
            try {
                URL url = new URL(WebAddress[0]);
                HttpURLConnection c = (HttpURLConnection) url.openConnection();
                c.connect();

                InputStream input = c.getInputStream();
                InputStreamReader reader = new InputStreamReader(input);
                int data = reader.read();
                while (data != -1) {
                    htmlcontent.append((char) data);
                    data = reader.read();
                }
            } catch (Exception e) {
                Log.i("Status : ", e.toString());
            }
            return htmlcontent.toString();
        }
    }
}

Jetty has an HTTP client which can be used to download a web page.
package com.zetcode;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class ReadWebPageEx5 {

    public static void main(String[] args) throws Exception {
        HttpClient client = null;
        try {
            client = new HttpClient();
            client.start();

            String url = "http://example.com";
            ContentResponse res = client.GET(url);
            System.out.println(res.getContentAsString());
        } finally {
            if (client != null) {
                client.stop();
            }
        }
    }
}
The example prints the contents of a simple web page.
In a Reading a web page in Java tutorial I have written six examples of downloading a web page programmatically in Java using URL, JSoup, HtmlCleaner, Apache HttpClient, Jetty HttpClient, and HtmlUnit.

I used the actual answer to this post and wrote the output into a file.
package test;

import java.net.*;
import java.io.*;

public class PDFTest {
    public static void main(String[] args) throws Exception {
        try {
            URL oracle = new URL("http://www.fetagracollege.org");
            BufferedReader in = new BufferedReader(new InputStreamReader(oracle.openStream()));

            String fileName = "D:\\a_01\\output.txt";
            PrintWriter writer = new PrintWriter(fileName, "UTF-8");

            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                System.out.println(inputLine);
                writer.println(inputLine);
            }
            in.close();
            writer.close(); // flush and release the file
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Related

Java read from URL stream working selectively

Summary: Sample Java code that reads over a URLConnection reads only certain URLs, not others.
Details: I have this sample Java code that I am using to read over a URLConnection. When the URL is "http://www.example.com", the code reads the page content without any issues. However, if the URL is "http://www.cnn.com", the page content is not read.
public class StackOverflow {
    public static void main(String[] args) throws Exception {
        BufferedReader inputStream = null;
        try {
            String urlStr = "http://www.cnn.com"; // does not work
            // urlStr = "http://www.example.com"; // works if this line is uncommented
            URL url = new URL(urlStr);
            inputStream = new BufferedReader(new InputStreamReader(url.openStream()));
            String textLine = null;
            while ((textLine = inputStream.readLine()) != null) {
                System.out.println(textLine);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (inputStream != null) inputStream.close();
        }
    }
}
CNN redirects from http to https, but your call doesn't follow the redirect. You are getting a 307 with an empty body, so readLine() returns null and your loop is skipped. Try https for CNN.
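If you do want to keep the http URL, a minimal sketch of following the redirect by hand (HttpURLConnection will not hop from http to https on its own):
HttpURLConnection conn = (HttpURLConnection) new URL("http://www.cnn.com").openConnection();
int code = conn.getResponseCode();               // 301/307 for the http URL
String location = conn.getHeaderField("Location");
if (location != null) {
    conn = (HttpURLConnection) new URL(location).openConnection();
}
// now read conn.getInputStream() as before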

Reading HTML content into java program

I am trying to get recharge plan information of a service provider into my Java program. The website contains dynamic data, and when I fetch the URL using URLConnection I only get the static content. I want to automate reading the recharge plans of different websites into my program.
package com.fs.store.test;

import java.net.*;
import java.io.*;

public class MyURLConnection
{
    private static final String baseTataUrl = "https://www.tatadocomo/pre-paypacks";

    public MyURLConnection()
    {
    }

    public void getMeData()
    {
        URLConnection urlConnection = null;
        BufferedReader in = null;
        try
        {
            URL url = new URL(baseTataUrl);
            urlConnection = url.openConnection();
            in = new BufferedReader(new InputStreamReader(urlConnection.getInputStream()/*, "UTF-8"*/));

            String currentLine = null;
            StringBuilder line = new StringBuilder();
            while ((currentLine = in.readLine()) != null)
            {
                System.out.println(currentLine);
                line = line.append(currentLine.trim());
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
        finally
        {
            try
            {
                in.close();
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }

    public static void main(String args[])
    {
        MyURLConnection test = new MyURLConnection();
        System.out.println("About to call getMeData()");
        test.getMeData();
    }
}
You must use one of the HtmlEditorKits, with JavaScript enabled, and then get the content.
See examples:
oreilly
Inspect the traffic. Firefox has a TamperData plugin, for instance. Then you may communicate more directly.
Use Apache's HttpClient to facilitate the communication, instead of plain URL.
Maybe use some JSON library if JSON data are coming back.
It's more detailed work, but you might then be able to skip some page loading.

Issue with downloading webpage in java?

So I'm trying to download the text of an aspx webpage (Roblox) with java. My code looks like this:
URL url;
InputStream is = null;
DataInputStream dis;
String line = "";

try {
    System.out.println("connecting");
    url = new URL("http://www.roblox.com");
    is = url.openStream(); // throws an IOException
    dis = new DataInputStream(new BufferedInputStream(is));
    while ((line = dis.readLine()) != null) {
        System.out.println(line);
    }
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    try {
        is.close();
    } catch (IOException ioe) {}
}
And it works for www.roblox.com. However, when I try to navigate to a different page - http://www.roblox.com/My/Money.aspx#/#TradeCurrency_tab
- it doesn't work, and just loads the www.roblox.com screen.
Could anyone help clarify this? Any help would be appreciated.
You are getting different content in Java than you see in the browser because the server adds the following header to the response:
Location=https://www.roblox.com/Login/Default.aspx?ReturnUrl=%2fMy%2fMoney.aspx
You should get the header values from URLConnection and redirect manually if the 'Location' header is present. As far as I know, even if you used HttpURLConnection you wouldn't be redirected automatically to 'https'.
EDITED:
You could do it with something like this (I removed other code, like exception handling, just to focus on the redirection, so don't take it as a proper 'coding' example):
public static void main(String[] args) throws Exception {
    printPage("http://www.roblox.com/My/Money.aspx#/#TradeCurrency_tab");
}

public static void printPage(String address) throws Exception {
    String line = null;
    System.out.println("connecting to: " + address);
    URL url = new URL(address);
    URLConnection conn = url.openConnection();
    String redirectAddress = conn.getHeaderField("Location");
    if (redirectAddress != null) {
        printPage(redirectAddress);
    } else {
        InputStream is = url.openStream();
        DataInputStream dis = new DataInputStream(new BufferedInputStream(is));
        while ((line = dis.readLine()) != null) {
            System.out.println(line);
        }
    }
}
Judging by the URL and the use of # I suspect this page is using javascript to dynamically create pages.
You can use something like http://seleniumhq.org/ to emulate a web-browser (including cookies) and this is a far more reliable approach for any kind of dynamic web content.
// The Firefox driver supports javascript
WebDriver driver = new FirefoxDriver();
// Go to the roblox page
driver.get("http://www.roblox.com");
System.out.println(driver.getPageSource());
Of course, there are many better ways to access elements of a page via Selenium's WebDriver API: http://selenium.googlecode.com/svn/trunk/docs/api/java/org/openqa/selenium/WebDriver.html
Download the JAR and all the deps in one file: http://code.google.com/p/selenium/downloads/detail?name=selenium-server-standalone-2.27.0.jar
And note, you can navigate to other pages via code: http://seleniumhq.org/docs/03_webdriver.html -
WebElement link = driver.findElement(By.linkText("Click Here Or Whatever"));
link.click();
then
System.out.println(driver.getPageSource());
will get the page text on the next page.

Need to write JUnit Test case

I am a fresher Java developer, but I don't have much knowledge about writing JUnit test cases. I am going to have a job test soon, for which they want me to write a program to:
1. Read HTML from any website, say "http://www.google.com" (you can use any of the inbuilt APIs in Java, like URLConnection).
2. Print the HTML from the URL above on the console and save it to a file (web-content.txt) on the local machine.
3. Write JUnit test cases for the above program.
I have completed the first two steps as below:
import java.io.*;
import java.net.*;

public class JavaSourceViewer {
    public static void main(String[] args) throws IOException {
        System.out.print("Enter url of local for viewing html source code: ");
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String url = br.readLine();
        try {
            URL u = new URL(url);
            HttpURLConnection uc = (HttpURLConnection) u.openConnection();
            int code = uc.getResponseCode();
            String response = uc.getResponseMessage();
            System.out.println("HTTP/1.x " + code + " " + response);

            InputStream in = new BufferedInputStream(uc.getInputStream());
            Reader r = new InputStreamReader(in);
            int c;
            FileOutputStream fout = new FileOutputStream("D://web-content.txt");
            while ((c = r.read()) != -1) {
                System.out.print((char) c);
                fout.write(c);
            }
            fout.close();
        } catch (MalformedURLException ex) {
            System.err.println(url + " is not a valid URL.");
        } catch (IOException ie) {
            System.out.println("Input/Output Error: " + ie.getMessage());
        }
    }
}
Now I need help with the 3rd step.
You need to extract a method, like this:
package abc.def;

import java.io.*;
import java.net.*;

public class JavaSourceViewer {
    public static void main(String[] args) throws IOException {
        System.out.print("Enter url of local for viewing html source code: ");
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String url = br.readLine();
        try {
            FileOutputStream fout = new FileOutputStream("D://web-content.txt");
            writeURL2Stream(url, fout);
            fout.close();
        } catch (MalformedURLException ex) {
            System.err.println(url + " is not a valid URL.");
        } catch (IOException ie) {
            System.out.println("Input/Output Error: " + ie.getMessage());
        }
    }

    // package-private (not private) so the test in the same package can call it
    static void writeURL2Stream(String url, OutputStream fout)
            throws MalformedURLException, IOException {
        URL u = new URL(url);
        HttpURLConnection uc = (HttpURLConnection) u.openConnection();
        int code = uc.getResponseCode();
        String response = uc.getResponseMessage();
        System.out.println("HTTP/1.x " + code + " " + response);

        InputStream in = new BufferedInputStream(uc.getInputStream());
        Reader r = new InputStreamReader(in);
        int c;
        while ((c = r.read()) != -1) {
            System.out.print((char) c);
            fout.write(c);
        }
    }
}
Ok, now you are able to write the JUnit test.
package abc.def;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.MalformedURLException;

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class MainTestCase {

    @Test
    public void test() throws MalformedURLException, IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        JavaSourceViewer.writeURL2Stream("http://www.google.de", baos);
        assertTrue(baos.toString().contains("google"));
    }
}
First, move your code from the main method to other methods in the JavaSourceViewer.
Second, make these methods return something that you can test, e.g. the file name of the output file or the Reader.
Then create a class for the unit test:
import org.junit.*;
import static org.junit.Assert.*;

public class JavaSourceViewerTest {

    @Test
    public void testJavaSourceViewer() {
        String url = "...";
        JavaSourceViewer jsv = new JavaSourceViewer();
        // call your methods here to parse the site
        jsv.xxxMethod(...)
        ....
        // call your checks here, e.g. compare the expected output file
        // name against the one your code actually used:
        assertEquals("D://web-content.txt", jsv.getOutputFile());
        ....
    }
}
To execute the JUnit test, use an IDE (like Eclipse) or put the junit.jar file on the classpath and run from the console:
java org.junit.runner.JUnitCore JavaSourceViewerTest
Your steps are not in the right order. Doing them as 3, then 1, and then 2 would definitely help: it forces you to think in terms of functionalities and units, and the resulting code will be testable without doing anything special. Test-first also guides the design of your system, besides providing you a safety net.
P.S. Never try to write the code before the test. It's simply not natural, nor does it give much value, as you can see.
Now, to test the 1st unit, you could compare the string (the HTML from google.com) with some existing string, but that test case would break if Google changes its page. Another way is to just check the HTTP code from the header; if that's 200, you are fine. Just an idea.
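For instance, a sketch of that status-code check as a JUnit 4 test (assumes org.junit.Test and org.junit.Assert.assertEquals are imported; the URL is just an example):
@Test
public void responseCodeIsOk() throws IOException {
    HttpURLConnection conn = (HttpURLConnection) new URL("http://www.google.com").openConnection();
    assertEquals(200, conn.getResponseCode());
}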
For the second one, you can compare the string you read from the web page to the string you wrote to the file, by reading the file back.
String str = "";
URL oracle = new URL("http://www.google.com/");
BufferedReader in = new BufferedReader(
new InputStreamReader(oracle.openStream()));
File file = new File("C:/Users/rohit/Desktop/rr1.txt");
String inputLine;
FileWriter fw = new FileWriter(file.getAbsoluteFile());
BufferedWriter bw = new BufferedWriter(fw);
while ((inputLine = in.readLine()) != null) {
System.out.println(inputLine);
bw.write(inputLine);
}
bw.close();
in.close();

Check if URL is valid or exists in java

I'm writing a Java program which hits a list of URLs and needs to first know if each URL exists. I don't know how to go about this and can't find the Java code to use.
The URL is like this:
http://ip:port/URI?Attribute=x&attribute2=y
These are URLs on our internal network that would return an XML if valid.
Can anyone suggest some code?
You could just use HttpURLConnection. If it is not valid you won't get anything back.
HttpURLConnection connection = null;
try {
    URL myurl = new URL("http://www.myURL.com");
    connection = (HttpURLConnection) myurl.openConnection();
    // set the request method to HEAD to reduce load, as Subirkumarsao said
    connection.setRequestMethod("HEAD");
    int code = connection.getResponseCode();
    System.out.println("" + code);
} catch (IOException e) {
    // handle invalid URL
}
Or you could ping it like you would from CMD and record the response.
String myurl = "google.com";
String ping = "ping " + myurl;
try {
    Runtime r = Runtime.getRuntime();
    Process p = r.exec(ping);
    BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
    String inLine;
    BufferedWriter write = new BufferedWriter(new FileWriter("C:\\myfile.txt"));
    while ((inLine = in.readLine()) != null) {
        write.write(inLine);
        write.newLine();
    }
    write.flush();
    write.close();
    in.close();
} catch (Exception ex) {
    // code here for what you want to do with invalid URLs
}
A malformed URL will give you an exception.
To know whether the URL is active or not, you have to hit it. There is no other way.
You can reduce the load by requesting only the header from the URL.
package com.my;

import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.UnknownHostException;

public class StrTest {

    public static void main(String[] args) throws IOException {
        try {
            URL url = new URL("http://www.yaoo.coi");
            InputStream i = null;
            try {
                i = url.openStream();
            } catch (UnknownHostException ex) {
                System.out.println("THIS URL IS NOT VALID");
            }
            if (i != null) {
                System.out.println("Its working");
            }
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
    }
}
output : THIS URL IS NOT VALID
Open a connection and check if the response contains valid XML? Was that too obvious, or are you looking for some other magic?
You may want to use HttpURLConnection and check for error status:
HttpURLConnection javadoc
