I am trying to get the recharge plan information of a service provider into my Java program. The website contains dynamic data, and when I fetch the URL using URLConnection I only get the static content. I want to automate pulling the recharge plans of different websites into my program.
package com.fs.store.test;

import java.net.*;
import java.io.*;

public class MyURLConnection
{
    private static final String baseTataUrl = "https://www.tatadocomo/pre-paypacks";

    public MyURLConnection()
    {
    }

    public void getMeData()
    {
        URLConnection urlConnection = null;
        BufferedReader in = null;
        try
        {
            URL url = new URL(baseTataUrl);
            urlConnection = url.openConnection();
            HttpURLConnection connection = (HttpURLConnection) urlConnection;
            in = new BufferedReader(new InputStreamReader(urlConnection.getInputStream()/*,"UTF-8"*/));
            String currentLine = null;
            StringBuilder line = new StringBuilder();
            while ((currentLine = in.readLine()) != null)
            {
                System.out.println(currentLine);
                line = line.append(currentLine.trim());
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
        finally
        {
            try
            {
                in.close();
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args)
    {
        MyURLConnection test = new MyURLConnection();
        System.out.println("About to call getMeData()");
        test.getMeData();
    }
}
You must use one of the HtmlEditorKits, with JavaScript enabled, and then get the content.
See the examples at O'Reilly.
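For reference, a minimal sketch of parsing a downloaded page with Swing's HTMLEditorKit (the URL is a placeholder; note the kit only parses the HTML it receives and does not execute JavaScript, so content injected by scripts will still be missing):

import java.io.InputStreamReader;
import java.io.Reader;
import java.net.URL;
import javax.swing.text.html.HTMLDocument;
import javax.swing.text.html.HTMLEditorKit;

public class EditorKitFetch {
    public static void main(String[] args) throws Exception {
        HTMLEditorKit kit = new HTMLEditorKit();
        HTMLDocument doc = (HTMLDocument) kit.createDefaultDocument();
        // Avoid a ChangedCharSetException when the page declares its own charset
        doc.putProperty("IgnoreCharsetDirective", Boolean.TRUE);
        try (Reader reader = new InputStreamReader(
                new URL("https://www.example.com/").openStream())) {
            kit.read(reader, doc, 0); // parse the downloaded HTML into the document model
        }
        // Plain text of the parsed document
        System.out.println(doc.getText(0, doc.getLength()));
    }
}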
Inspect the traffic. Firefox has a TamperData plugin, for instance. Then you can communicate with the server more directly.
Use Apache's HttpClient to facilitate the communication, instead of a plain URL.
Maybe use a JSON library if JSON data comes back.
It is more detailed work, but you can skip some of the page loading.
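For the HttpClient suggestion, a minimal GET sketch using Apache HttpClient 4.x (the endpoint URL is a placeholder; the real one would be whatever request you see when inspecting the traffic):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class PlanFetcher {
    public static void main(String[] args) throws Exception {
        // Placeholder for the data endpoint observed in the browser traffic
        HttpGet get = new HttpGet("https://www.example.com/api/pre-pay-packs");
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(get)) {
            // Read the full response body into a String
            String body = EntityUtils.toString(response.getEntity());
            System.out.println(body);
        }
    }
}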
I am new to Java, but really want to become better at it. I'm trying to write a simple RSS reader. Here's the code:
import java.io.*;
import java.net.*;

public class RSSReader {

    public static void main(String[] args) {
        System.out.println(readRSS("http://www.usnews.com/rss/health-news"));
    }

    public static String readRSS(String urlAddress) {
        try {
            URL rssUrl = new URL(urlAddress);
            BufferedReader in = new BufferedReader(new InputStreamReader(rssUrl.openStream()));
            String sourceCode = "";
            String line;
            while ((line = in.readLine()) != null) {
                if (line.contains("<title>")) {
                    int firstPos = line.indexOf("<title>");
                    String temp = line.substring(firstPos);
                    temp = temp.replace("<title>", "");
                    int lastPos = temp.indexOf("</title>");
                    temp = temp.substring(0, lastPos);
                    sourceCode += temp + "\n";
                }
            }
            System.out.println("YAAAH" + sourceCode);
            in.close();
            return sourceCode;
        } catch (MalformedURLException ue) {
            System.out.println("Malformed URL");
        } catch (IOException ioe) {
            System.out.println("WTF?");
        }
        return null;
    }
}
But it is catching IOException all the time, and I see "WTF?".
I realised that the whole program fails when openStream() starts its work.
I don't know how to fix it.
As indicated, you need to set your proxy parameters/credentials right before you establish the connection.
Set the proxy username and password only if your proxy requires authentication.
public static String readRSS(String urlAddress) {
    System.setProperty("http.proxyHost", YOUR_PROXY_HOST);
    System.setProperty("http.proxyPort", YOUR_PROXY_PORT);
    // The next two are for authenticated proxies only
    System.setProperty("http.proxyUser", YOUR_USERNAME);
    System.setProperty("http.proxyPassword", YOUR_PASSWORD);
    try {
        ...
I tested your method behind a proxy and it works perfectly after setting the parameters as above.
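If the proxyUser/proxyPassword system properties are not honoured by your JVM, a java.net.Authenticator can supply the credentials instead; a small sketch using the same placeholder names:

import java.net.Authenticator;
import java.net.PasswordAuthentication;

// Register default credentials for the authenticated proxy
Authenticator.setDefault(new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication(YOUR_USERNAME, YOUR_PASSWORD.toCharArray());
    }
});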
import java.io.*;
import java.net.*;

public class JavaSourceViewer {

    public static void main(String[] args) throws IOException {
        String java.io.BufferedReader.readLine() throws IOException
        System.out.print("Enter url of local for viewing html source code: ");
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String url = br.readLine();
        try {
            URL u = new URL(url);
            HttpURLConnection uc = (HttpURLConnection) u.openConnection();
            int code = uc.getResponseCode();
            String response = uc.getResponseMessage();
            System.out.println("HTTP/1.x " + code + " " + response);

            InputStream in = new BufferedInputStream(uc.getInputStream());
            Reader r = new InputStreamReader(in);
            int c;
            FileOutputStream fout = new FileOutputStream("D://web-content.txt");
            while ((c = r.read()) != -1) {
                System.out.print((char) c);
                fout.write(c);
            }
            fout.close();
        } catch (MalformedURLException ex) {
            System.err.println(url + " is not a valid URL.");
        } catch (IOException ie) {
            System.out.println("Input/Output Error: " + ie.getMessage());
        }
    }
}
I don't know what is wrong with this, but the code is not running; it just shows an error.
It is the code: it looks like you've got a copy/paste error. Method declarations are not permitted within the body of other methods. Remove this partial method declaration from the main method:
String java.io.BufferedReader.readLine() throws IOException
Sometimes restarting Eclipse solves it. Also make sure the JDK path is accurate, try configuring it again, and finally clean the project from within Eclipse.
I want to login to a website, but if I try it with this code:
package URL;

// Variables to hold the URL object and its connection to that URL.
import java.net.*;
import java.io.*;

public class URLLogin {

    private static URL URLObj;
    private static URLConnection connect;

    public static void main(String[] args) {
        try {
            // Establish a URL and open a connection to it. Set it to output mode.
            URLObj = new URL("http://login.szn.cz");
            connect = URLObj.openConnection();
            connect.setDoOutput(true);
        }
        catch (MalformedURLException ex) {
            System.out.println("The URL specified was unable to be parsed or uses an invalid protocol. Please try again.");
            System.exit(1);
        }
        catch (Exception ex) {
            System.out.println("An exception occurred. " + ex.getMessage());
            System.exit(1);
        }

        try {
            // Create a buffered writer to the URLConnection's output stream and write our form parameters.
            BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(connect.getOutputStream(), "UTF-8"));
            writer.write("username=S&password=s&login=Přihlásit se");
            writer.close();

            // Now establish a buffered reader to read the URLConnection's input stream.
            BufferedReader reader = new BufferedReader(new InputStreamReader(connect.getInputStream()));
            String lineRead = "";

            // Read all available lines of data from the URL and print them to screen.
            while ((lineRead = reader.readLine()) != null) {
                System.out.println(lineRead);
            }
            reader.close();
        }
        catch (Exception ex) {
            System.out.println("There was an error reading or writing to the URL: " + ex.getMessage());
        }
    }
}
I get this error:
There was an error reading or writing to the URL: Server returned HTTP
response code: 405 for URL: http://login.szn.cz
Is there a way I can log in to this website? Or maybe I can reuse the login cookies from my Opera browser?
Thanks for any advice.
You can call the URL object's openConnection method to get a URLConnection object. You can use this URLConnection object to set up any parameters and general request properties you need before connecting. The connection to the remote object represented by the URL is only initiated when URLConnection.connect is called. The following code opens a connection to login.szn.cz:
try {
    URL myURL = new URL("http://login.szn.cz");
    URLConnection myURLConnection = myURL.openConnection();
    myURLConnection.connect();
}
catch (MalformedURLException e) {
    // new URL() failed
    // ...
}
catch (IOException e) {
    // openConnection() failed
    // ...
}
A new URLConnection object is created every time by calling the openConnection method of the protocol handler for this URL.
Also see these links:
http://www.coderanch.com/t/524061/open-source/Java-program-Login-website-url
Login on website with java
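The 405 itself usually means the server does not accept the HTTP method used on that URL. If the login form expects a POST, a minimal sketch along these lines may help (the form action URL and the field names are assumptions; check the actual login form):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class FormLogin {
    public static void main(String[] args) throws Exception {
        // The form action URL and parameter names below are placeholders
        URL url = new URL("https://login.szn.cz/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        String params = "username=" + URLEncoder.encode("S", "UTF-8")
                      + "&password=" + URLEncoder.encode("s", "UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(params.getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("Response code: " + conn.getResponseCode());
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}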
I would like to be able to fetch a web page's HTML and save it to a String, so I can do some processing on it. Also, how could I handle various types of compression?
How would I go about doing that using Java?
I'd use a decent HTML parser like Jsoup. It's then as easy as:
String html = Jsoup.connect("http://stackoverflow.com").get().html();
It handles GZIP and chunked responses and character encoding fully transparently. It offers more advantages as well, like HTML traversing and manipulation by CSS selectors, as jQuery can do. You only have to grab it as a Document, not as a String.
Document document = Jsoup.connect("http://google.com").get();
You really don't want to run basic String methods or even regex on HTML to process it.
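For instance, a small sketch of that kind of CSS-selector traversal with Jsoup (URL and selector are just illustrative):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupExample {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("https://stackoverflow.com").get();
        // Print the text and absolute URL of every link on the page
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.text() + " -> " + link.attr("abs:href"));
        }
    }
}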
See also:
What are the pros and cons of leading HTML parsers in Java?
Here's some tested code using Java's URL class. I'd recommend doing a better job than I do here of handling the exceptions, or passing them up the call stack, though.
public static void main(String[] args) {
    URL url;
    InputStream is = null;
    BufferedReader br;
    String line;

    try {
        url = new URL("http://stackoverflow.com/");
        is = url.openStream(); // throws an IOException
        br = new BufferedReader(new InputStreamReader(is));

        while ((line = br.readLine()) != null) {
            System.out.println(line);
        }
    } catch (MalformedURLException mue) {
        mue.printStackTrace();
    } catch (IOException ioe) {
        ioe.printStackTrace();
    } finally {
        try {
            if (is != null) is.close();
        } catch (IOException ioe) {
            // nothing to see here
        }
    }
}
Bill's answer is very good, but you may want to do some things with the request like compression or user-agents. The following code shows how you can add support for various types of compression to your requests.
URL url = new URL(urlStr);
HttpURLConnection conn = (HttpURLConnection) url.openConnection(); // cast shouldn't fail
HttpURLConnection.setFollowRedirects(true);
// allow both GZip and Deflate (ZLib) encodings
conn.setRequestProperty("Accept-Encoding", "gzip, deflate");

String encoding = conn.getContentEncoding();
InputStream inStr = null;

// create the appropriate stream wrapper based on the encoding type
if (encoding != null && encoding.equalsIgnoreCase("gzip")) {
    inStr = new GZIPInputStream(conn.getInputStream());
} else if (encoding != null && encoding.equalsIgnoreCase("deflate")) {
    inStr = new InflaterInputStream(conn.getInputStream(), new Inflater(true));
} else {
    inStr = conn.getInputStream();
}
To also set the user-agent add the following code:
conn.setRequestProperty ( "User-agent", "my agent name");
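Whichever wrapper you end up with, you still have to drain it; a minimal sketch that reads inStr into a String, assuming the content is UTF-8:

// Read the (possibly decompressed) stream into a String
BufferedReader reader = new BufferedReader(new InputStreamReader(inStr, "UTF-8"));
StringBuilder page = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
    page.append(line).append('\n');
}
reader.close();
String html = page.toString();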
Well, you could go with the built-in libraries such as URL and URLConnection, but they don't give very much control.
Personally I'd go with the Apache HTTPClient library.
Edit: HttpClient has been set to end of life by Apache. The replacement is Apache HttpComponents.
None of the above-mentioned approaches downloads the web page text as it looks in the browser. These days a lot of data is loaded into browsers through scripts in HTML pages; none of the above techniques supports scripts, they just download the HTML text. HtmlUnit supports JavaScript, so if you want to download the web page text as it looks in the browser, you should use HtmlUnit.
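A minimal HtmlUnit sketch (using the pre-3.0 com.gargoylesoftware package names; the URL is a placeholder):

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class HtmlUnitFetch {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            webClient.getOptions().setJavaScriptEnabled(true); // execute the page's scripts
            webClient.getOptions().setThrowExceptionOnScriptError(false);
            HtmlPage page = webClient.getPage("https://stackoverflow.com");
            System.out.println(page.asXml()); // DOM serialised after JavaScript has run
        }
    }
}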
You may also need to extract code from a secure web page (https protocol). In the following example, the HTML file is saved into c:\temp\filename.html. Enjoy!
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

/**
 * <b>Get the Html source from the secure url</b>
 */
public class HttpsClientUtil {

    public static void main(String[] args) throws Exception {
        String httpsURL = "https://stackoverflow.com";
        String FILENAME = "c:\\temp\\filename.html";
        BufferedWriter bw = new BufferedWriter(new FileWriter(FILENAME));

        URL myurl = new URL(httpsURL);
        HttpsURLConnection con = (HttpsURLConnection) myurl.openConnection();
        con.setRequestProperty("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0");

        InputStream ins = con.getInputStream();
        InputStreamReader isr = new InputStreamReader(ins, "Windows-1252");
        BufferedReader in = new BufferedReader(isr);

        String inputLine;
        // Write each line into the file
        while ((inputLine = in.readLine()) != null) {
            System.out.println(inputLine);
            bw.write(inputLine);
        }
        in.close();
        bw.close();
    }
}
To do so using NIO.2's powerful Files.copy(InputStream in, Path target):
URL url = new URL( "http://download.me/" );
Files.copy( url.openStream(), Paths.get("downloaded.html" ) );
On a Unix/Linux box you could just run 'wget' but this is not really an option if you're writing a cross-platform client. Of course this assumes that you don't really want to do much with the data you download between the point of downloading it and it hitting the disk.
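If you then want the page in memory as a String, as the question asks, a small follow-up sketch (assuming the page is UTF-8 encoded):

import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NioDownload {
    public static void main(String[] args) throws Exception {
        Path target = Paths.get("downloaded.html");
        try (InputStream in = new URL("http://example.com/").openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        // Read the saved file back into a String (UTF-8 assumed)
        String html = new String(Files.readAllBytes(target), StandardCharsets.UTF_8);
        System.out.println(html.length() + " characters downloaded");
    }
}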
This class may help; it gets the page source and filters out some of the information (it's an Android example).
public class MainActivity extends AppCompatActivity {

    EditText url;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate( savedInstanceState );
        setContentView( R.layout.activity_main );
        url = ((EditText) findViewById( R.id.editText ));
        DownloadCode obj = new DownloadCode();
        try {
            String des = " ";
            String tag1 = "<div class=\"description\">";
            String l = obj.execute( "http://www.nu.edu.pk/Campus/Chiniot-Faisalabad/Faculty" ).get();
            url.setText( l );
            url.setText( " " );
            String[] t1 = l.split(tag1);
            String[] t2 = t1[0].split( "</div>" );
            url.setText( t2[0] );
        }
        catch (Exception e) {
            Toast.makeText( this, e.toString(), Toast.LENGTH_SHORT ).show();
        }
    }

    // input, extra function run in parallel, output
    class DownloadCode extends AsyncTask<String, Void, String> {
        @Override
        protected String doInBackground(String... WebAddress) // strings of web addresses separated by ','
        {
            String htmlcontent = " ";
            try {
                URL url = new URL( WebAddress[0] );
                HttpURLConnection c = (HttpURLConnection) url.openConnection();
                c.connect();
                InputStream input = c.getInputStream();
                int data;
                InputStreamReader reader = new InputStreamReader( input );
                data = reader.read();
                while (data != -1) {
                    char content = (char) data;
                    htmlcontent += content;
                    data = reader.read();
                }
            }
            catch (Exception e) {
                Log.i("Status : ", e.toString());
            }
            return htmlcontent;
        }
    }
}
Jetty has an HTTP client which can be used to download a web page.
package com.zetcode;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class ReadWebPageEx5 {

    public static void main(String[] args) throws Exception {
        HttpClient client = null;
        try {
            client = new HttpClient();
            client.start();
            String url = "http://example.com";
            ContentResponse res = client.GET(url);
            System.out.println(res.getContentAsString());
        } finally {
            if (client != null) {
                client.stop();
            }
        }
    }
}
The example prints the contents of a simple web page.
In the Reading a web page in Java tutorial I have written six examples of downloading a web page programmatically in Java using URL, JSoup, HtmlCleaner, Apache HttpClient, Jetty HttpClient, and HtmlUnit.
I used the URL-based answer to this post, writing the output into a file.
package test;

import java.net.*;
import java.io.*;

public class PDFTest {

    public static void main(String[] args) throws Exception {
        try {
            URL oracle = new URL("http://www.fetagracollege.org");
            BufferedReader in = new BufferedReader(new InputStreamReader(oracle.openStream()));

            String fileName = "D:\\a_01\\output.txt";
            PrintWriter writer = new PrintWriter(fileName, "UTF-8");

            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                System.out.println(inputLine);
                writer.println(inputLine);
            }
            writer.close();
            in.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}