How to read specific line value from a website? - java

The following code reads an entire page from a website, but I want to read only a specific value. As an example, when I pass a word to a website, it dynamically displays what kind of word I entered: when I pass a tree name, it returns a biographical name. I want to read just that biographical name, not the website's whole data set. How should I change this?
import java.net.*;
import java.io.*;

public class DataRecord {
    public static void main(String[] args) throws Exception {
        URL oracle = new URL("http://www.oracle.com/");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(oracle.openStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            System.out.println(inputLine);
        }
        in.close();
    }
}
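A common approach is to scan each line for a known marker instead of printing everything. The sketch below is a minimal, hypothetical example: it assumes the value you want appears on a line matching some regex you supply, and returns the first capture group. The pattern and the helper name are illustrative only; you would have to adapt the regex to the actual markup of the page you are scraping.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LineFilter {
    // Return the first capture-group match found in the stream, or null if none.
    public static String findFirst(Reader source, String regex) throws IOException {
        Pattern pattern = Pattern.compile(regex);
        try (BufferedReader in = new BufferedReader(source)) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = pattern.matcher(line);
                if (m.find()) {
                    return m.group(1); // only the captured value, not the whole line
                }
            }
        }
        return null; // marker never appeared
    }
}
```

You would call it with `new InputStreamReader(oracle.openStream())` as the source; everything else about the pattern depends on the page's HTML.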

Related

BufferedReader stuck at last input line without ending the program

I'm using BufferedReader to read data from System.in (a text file redirected with `< file.txt`) and then write it to the console. The problem is that my program prints every line except the last one and then keeps running without doing anything. If I end it manually, it prints the final line.
This is my code:
public void updateText() throws IOException {
    try {
        InputStreamReader inputStreamReader = new InputStreamReader(System.in);
        BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
        String inputLine;
        while ((inputLine = bufferedReader.readLine()) != null) {
            System.out.println(inputLine);
        }
        inputStreamReader.close();
        bufferedReader.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Here is an alternative way of waiting for available data (ref.: the Java 11 docs for BufferedReader.ready()):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

class TestClass {
    public static void main(String[] args) {
        try (InputStreamReader inputStreamReader = new InputStreamReader(System.in);
             BufferedReader bufferedReader = new BufferedReader(inputStreamReader)) {
            String line;
            // wait until data available
            while (!bufferedReader.ready()) {
                while ((line = bufferedReader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Example output:
Hello world
Hello world
Hello world
Hello world
If this only occurs in Eclipse, your issue is most likely a duplicate of Eclipse bug 513713.
You don't need to read standard input line by line when you could simply transfer the data in one step, replacing the body of updateText():
System.in.transferTo(System.out);
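`InputStream.transferTo` (available since Java 9) copies every remaining byte of an input stream to an output stream in one call. A minimal sketch using in-memory streams, so it runs without any redirected input; the same call works unchanged for `System.in` and `System.out`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class TransferDemo {
    // Drain an entire input stream into a String in one step.
    public static String copyAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        in.transferTo(out); // equivalent of System.in.transferTo(System.out)
        return out.toString("UTF-8");
    }
}
```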

Scanner (getInputStream()) throws "no line found" exception when reading from URL connection

I am trying to write a Java program that establishes a connection to Yahoo Finance and pulls some data off the website for a specific stock.
The program terminates with the exception "no line found", which is thrown at the if(input.hasNextLine()) statement. I understand what the exception means, but I can't figure out where the error is.
I know that the problem is not in the URL construction, because the URL downloads the requested data when copied into a web browser.
I hope someone can point me in the right direction; I have been puzzled for several hours and have searched the forum without luck so far.
My code looks as follows:
import java.net.URL;
import java.net.URLConnection;
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.Scanner;

public class Download {
    public Download(String symbol, GregorianCalendar end, GregorianCalendar start) {
        // Creates the URL
        String url = "http://chart.finance.yahoo.com/table.csv?s=" + symbol +
                "&a=" + start.get(Calendar.MONTH) +
                "&b=" + start.get(Calendar.DAY_OF_MONTH) +
                "&c=" + start.get(Calendar.YEAR) +
                "&d=" + end.get(Calendar.MONTH) +
                "&e=" + end.get(Calendar.DAY_OF_MONTH) +
                "&f=" + end.get(Calendar.YEAR) +
                "&g=d&ignore=.csv";
        try {
            // Creates the URL object, and establishes connection
            URL yhoofin = new URL(url);
            URLConnection data = yhoofin.openConnection();
            // Opens an input stream to read from
            Scanner input = new Scanner(data.getInputStream(), "UTF-8");
            System.out.println(input.nextLine());
            // skips the first line
            if (input.hasNextLine()) {
                input.nextLine();
                // tries to print the data
                while (input.hasNextLine()) {
                    String line = input.nextLine();
                    System.out.println(line);
                }
            }
            // closes connection
            input.close();
        } catch (Exception e) {
            System.err.println(e);
        }
    }
}
with the following main method:
import java.util.GregorianCalendar;

public class test {
    public static void main(String[] args) {
        GregorianCalendar start = new GregorianCalendar(2015, 7, 10);
        GregorianCalendar end = new GregorianCalendar(2016, 7, 10);
        String symbol = "NVO";
        Download test = new Download(symbol, end, start);
        System.out.println("Done");
    }
}
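For what it's worth, `Scanner.nextLine()` throws `NoSuchElementException: No line found` whenever it is called with no line available, so every `nextLine()` call should be guarded by `hasNextLine()`; the unguarded `System.out.println(input.nextLine())` in the question's code is the likely throw site when the server returns an empty body. A minimal sketch of the guarded pattern, over an in-memory stream so it needs no network:

```java
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class GuardedScanner {
    // Read all lines, skipping the first (header) line if one exists.
    public static List<String> readSkippingHeader(InputStream in) {
        List<String> lines = new ArrayList<>();
        try (Scanner input = new Scanner(in, "UTF-8")) {
            if (input.hasNextLine()) {
                input.nextLine(); // skip header; guarded, so no "No line found"
            }
            while (input.hasNextLine()) {
                lines.add(input.nextLine());
            }
        }
        return lines;
    }
}
```

With an empty stream this simply returns an empty list instead of throwing.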
//http://chart.finance.yahoo.com/table.csv?s=ABCB4.SA&a=1&b=19&c=2017&d=2&e=19&f=2017&g=d&ignore=.csv
String url = "http://chart.finance.yahoo.com/table.csv?s=" + symbol + ".SA" +
        "&a=" + start.get(Calendar.MONTH) +
        "&b=" + start.get(Calendar.DAY_OF_MONTH) +
        "&c=" + start.get(Calendar.YEAR) +
        "&d=" + end.get(Calendar.MONTH) +
        "&e=" + end.get(Calendar.DAY_OF_MONTH) +
        "&f=" + end.get(Calendar.YEAR) +
        "&g=d&ignore=.csv";
System.out.println(url);
try {
    URL yhoofin = new URL(url);
    URLConnection data = yhoofin.openConnection();
    data.connect(); // not strictly necessary
    System.out.println("Connection Open! = " + data.getHeaderFields().toString());
    // follow a redirect manually if the server sends one
    String redirect = data.getHeaderField("Location");
    if (redirect != null) {
        data = new URL(redirect).openConnection();
    }
    BufferedReader in = new BufferedReader(new InputStreamReader(data.getInputStream()));
    List<String> lines = new ArrayList<>(); // collects the rows for later use
    String inputLine;
    in.readLine(); // skip first line
    while ((inputLine = in.readLine()) != null) {
        System.out.println(inputLine);
        lines.add(inputLine);
    }
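As an aside, assembling the query string with `String.format` keeps the long parameter list readable. The sketch below reproduces the URL layout used above; note that Yahoo has since retired this CSV endpoint, so the URL is purely illustrative:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class YahooUrl {
    // Build the historical-quotes URL from a symbol and a date range.
    public static String build(String symbol, GregorianCalendar start, GregorianCalendar end) {
        return String.format(
                "http://chart.finance.yahoo.com/table.csv?s=%s&a=%d&b=%d&c=%d&d=%d&e=%d&f=%d&g=d&ignore=.csv",
                symbol,
                start.get(Calendar.MONTH), start.get(Calendar.DAY_OF_MONTH), start.get(Calendar.YEAR),
                end.get(Calendar.MONTH), end.get(Calendar.DAY_OF_MONTH), end.get(Calendar.YEAR));
    }
}
```

Remember that `Calendar.MONTH` is zero-based, which is why `new GregorianCalendar(2015, 7, 10)` produces `a=7` (August).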

How to copy the contents of an HTML div from a website using Java

I'm trying to write a function in Java that will basically copy the HTML code of a div from a URL. The data in question is from http://cdn.espn.com/sports/scores#completed, but the data is not present when fetched by my function using I/O streams. The data itself is visible when I click Inspect and Ctrl-F "completed-soccer", but my code does not retrieve it at all. Here is the code I used:
package project;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class DownloadPage {
    public static void main(String[] args) throws IOException {
        // Make a URL to the web page
        URL url = new URL("http://cdn.espn.com/sports/scores#completed-soccer");
        // Get the input stream through URL Connection
        URLConnection con = url.openConnection();
        InputStream is = con.getInputStream();
        BufferedReader br = new BufferedReader(new InputStreamReader(is));
        String line;
        // read each line and write to System.out
        while ((line = br.readLine()) != null) {
            System.out.println(line);
        }
    }
}
If you can't reach the data with a normal HTTP request, you have to use a more capable library, such as Selenium with a WebDriver.
That library lets you actually navigate a web page, execute JavaScript, and inspect all the elements. You can find plenty of information and guides for it.
Try this code:
public static void main(String[] args) throws IOException {
    URL url = new URL("http://cdn.espn.com/sports/scores#completed-soccer");
    HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
    try {
        // read from the same connection, so disconnect() in finally closes it
        InputStream in = urlConnection.getInputStream();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        StringBuilder result = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            result.append(line);
        }
        System.out.println(result.toString());
    } finally {
        urlConnection.disconnect();
    }
}
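If the div does arrive in the raw HTML, you can cut it out of the downloaded string. The sketch below uses a regex for brevity, with a hypothetical id; regexes are fragile against nested or reformatted HTML, so a real parser such as jsoup (`Jsoup.parse(html).getElementById(id)`) is the more robust choice. And if the content is injected by JavaScript after page load, as the ESPN page appears to do, only a browser-driving tool like Selenium will see it at all.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DivExtractor {
    // Return the inner HTML of the first <div id="..."> with the given id, or null.
    // Works only for simple, non-nested divs -- use a real HTML parser otherwise.
    public static String innerHtml(String html, String id) {
        Pattern p = Pattern.compile(
                "<div[^>]*id=\"" + Pattern.quote(id) + "\"[^>]*>(.*?)</div>",
                Pattern.DOTALL);
        Matcher m = p.matcher(html);
        return m.find() ? m.group(1) : null;
    }
}
```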

dynamically load JSON meta data from a URL, using gson

How can I "put" the output generated, which looks to be valid JSON, into an actual JSON object?
According to this answer, gson requires that a class be defined. Surely there's a dynamic way to load or define a class? Or a generic type?
In XML a schema or DTD would be available. Can I dynamically load or find something like that for JSON, using gson?
code:
package net.bounceme.noagenda;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

import static java.lang.System.out;

public class NoAgenda {

    public static void main(String[] args) throws MalformedURLException, IOException {
        List<URL> urls = new ArrayList<>();
        new NoAgenda().iterateURLs(urls);
    }

    private void iterateURLs(List<URL> urls) throws MalformedURLException, IOException {
        urls.add(new URL("https://www.flickr.com/photos/"));
        urls.add(new URL("http://www.javascriptkit.com/dhtmltutors/javascriptkit.json"));
        urls.add(new URL("http://api.wunderground.com/api/54f05b23fd8fd4b0/geolookup/conditions/forecast/q/US/CO/Denver.json"));
        for (URL url : urls) {
            connect(url);
        }
    }

    private void connect(URL url) throws IOException {
        out.println(url);
        String line;
        StringBuilder sb = new StringBuilder();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));
        while ((line = in.readLine()) != null) {
            sb.append(line + "\n");
        }
        in.close();
        out.println(sb);
        // HOW DO I TURN THIS INTO AN ACTUAL JSON??
    }
}
Wikipedia says:
Schema and metadata
JSON Schema
JSON Schema[20] specifies a JSON-based format to define the structure
of JSON data for validation, documentation, and interaction control. A
JSON Schema provides a contract for the JSON data required by a given
application, and how that data can be modified.
The three arbitrary URLs I'm working with:
https://www.flickr.com/photos/
http://www.javascriptkit.com/dhtmltutors/javascriptkit.json
http://api.wunderground.com/api/54f05b23fd8fd4b0/geolookup/conditions/forecast/q/US/CO/Denver.json
You can use the following (note that JSONObject here comes from the org.json library, not gson):
StringBuilder sb = new StringBuilder();
String line;
BufferedReader br = new BufferedReader(new InputStreamReader(is));
while ((line = br.readLine()) != null) {
    sb.append(line);
}
JSONObject json = new JSONObject(sb.toString());
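With gson specifically, you don't need to define a class at all: `JsonParser` turns any JSON text into a generic `JsonElement` tree that you can walk dynamically. A minimal sketch (the static `JsonParser.parseString` shown here exists in gson 2.8.6+; older versions use `new JsonParser().parse(...)`); the keys are made up for illustration:

```java
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

public class JsonTree {
    // Parse arbitrary JSON text into gson's generic tree model -- no POJO needed.
    public static JsonElement parse(String json) {
        return JsonParser.parseString(json);
    }

    public static void main(String[] args) {
        JsonObject obj = parse("{\"city\":\"Denver\",\"temp_f\":71.2}").getAsJsonObject();
        System.out.println(obj.get("city").getAsString());
        System.out.println(obj.get("temp_f").getAsDouble());
    }
}
```

In `connect()` above, you would pass `sb.toString()` to `parse` and then navigate the tree with `getAsJsonObject()` / `getAsJsonArray()`.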

Get content from external web page on java

I have the following EJB:
@Stateless
public class SomeService {
    public String someOperation() {
        ...
    }
}
So the problem is to get the content of an external web page (for example http://example.com/some/link/info.xml) and return it. I need something like this:
String pageContent = SomeFancyLibrary.getPageContent(someurl);
Sorry if this question has already been asked here, but I didn't find any info.
From: https://docs.oracle.com/javase/tutorial/networking/urls/readingURL.html
import java.net.*;
import java.io.*;

public class URLReader {
    public static void main(String[] args) throws Exception {
        URL oracle = new URL("http://www.oracle.com/");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(oracle.openStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null)
            System.out.println(inputLine);
        in.close();
    }
}
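Wrapped as a helper in the spirit of the hypothetical `SomeFancyLibrary.getPageContent`, the same pattern looks like this; the stream-reading part is split out so it can be exercised without a network connection:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;

public class PageFetcher {
    // Drain a stream into a single string, preserving line breaks.
    public static String readAll(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }

    // Fetch a page over HTTP(S); blocking I/O, which is fine inside a stateless EJB
    // as long as the call is expected to take network time.
    public static String getPageContent(String url) throws IOException {
        return readAll(new URL(url).openStream());
    }
}
```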

Categories

Resources