I am trying to write a simple Java command-line program that accepts a playlist URL on the command line and returns the playlist content.
I am getting the following response back:
Enter playlist url here (0 to quit):
http://gv8748.lu.edu:8084/sweng987/simple-01/playlist.m3u8
java.net.MalformedURLException: no protocol: http://lu8748.lu.edu:8084/sweng987/simple-01/playlist.m3u8
at java.base/java.net.URL.<init>(URL.java:627)
at java.base/java.net.URL.<init>(URL.java:523)
at java.base/java.net.URL.<init>(URL.java:470)
at edu.lu.sweng987.SimplePlaylist.getPlaylistUrl(SimplePlaylist.java:36)
at edu.lu.sweng987.SimplePlaylist.main(SimplePlaylist.java:21)
My code is the following:
package edu.psgv.sweng861;

import java.util.Scanner;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.net.*;

public class SimplePlaylist {

    private SimplePlaylist() {
        //don't allow instances
    }

    // The main function returns the URL entered
    public static void main(String[] args) throws IOException {
        String output = getPlaylistUrl("");
        System.out.println(output);
    }

    private static String getPlaylistUrl(String theUrl) {
        String content = "";
        Scanner scanner = new Scanner(System.in);
        boolean validInput = false;
        System.out.println("Enter playlist url here (0 to quit):");
        content = scanner.nextLine();
        try {
            URL url = new URL(theUrl);
            URLConnection urlConnection = (HttpURLConnection) url.openConnection();
            BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(urlConnection.getInputStream()));
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                content += line + "\n";
            }
            bufferedReader.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return content;
    }
}
You have incorrectly used the method's parameter when creating the URL instance, instead of the local variable that actually contains the URL the user entered.
Change
content = scanner.nextLine();
try {
    URL url = new URL(theUrl);
to
content = scanner.nextLine();
try {
    URL url = new URL(content);
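For reference, a minimal corrected sketch of the whole method, as a drop-in replacement inside the same class (dropping the unused theUrl parameter and closing the reader with try-with-resources are optional clean-ups, not part of the required fix):
private static String getPlaylistUrl() {
    String content = "";
    Scanner scanner = new Scanner(System.in);
    System.out.println("Enter playlist url here (0 to quit):");
    // Read the URL the user typed; this is what must be passed to new URL(...)
    String enteredUrl = scanner.nextLine();
    try {
        URL url = new URL(enteredUrl);
        URLConnection urlConnection = url.openConnection();
        // try-with-resources closes the reader even when an exception is thrown
        try (BufferedReader bufferedReader = new BufferedReader(
                new InputStreamReader(urlConnection.getInputStream()))) {
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                content += line + "\n";
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return content;
}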
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;

public class test {

    public static void main(String[] args) throws Exception {
        demoReadall();
    }

    public static void demoReadall() {
        String dates;
        String res;
        String status;
        try {
            FileInputStream fstream = new FileInputStream("RunningSystem.log");
            BufferedReader br = new BufferedReader(new InputStreamReader(fstream));
            String strLine;
            while ((strLine = br.readLine()) != null) {
                String[] LineArray = strLine.split("\\|");
                if ("BATCH_PAYMENT".equals(LineArray[1])) {
                    dates = LineArray[2];
                    res = LineArray[3];
                    status = LineArray[4];
                    System.out.println(dates);
                    System.out.println(res);
                    System.out.println(status);
                }
            }
            fstream.close();
        } catch (Exception e) {
            System.err.println("Error: " + e.getMessage());
        }
    }
}
How do I write front-end code to retrieve the information this code produces? Please help me.
I would like to know how to send data between Java and HTML; please advise on how to write and send the data.
Read about JSP, servlets, or the Spring framework ;-)
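For instance, a minimal servlet sketch, assuming a Servlet 3.0+ container such as Tomcat; the /log mapping and the idea of returning the parsed log lines as plain text are illustrative assumptions, not a fixed design:
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint: an HTML page (or a fetch() call) can GET /log to read the data
@WebServlet("/log")
public class LogServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain;charset=UTF-8");
        PrintWriter out = resp.getWriter();
        // In a real application you would call the parsing logic from demoReadall()
        // here and write each matching line to the response instead of System.out
        out.println("BATCH_PAYMENT|2020-01-01|OK|DONE");
    }
}
The HTML side can then be a plain page with a link, a form, or an XMLHttpRequest/fetch call to that URL.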
I am following a tutorial on web scraping from the book "Web Scraping with Java". The following code gives me a NullPointerException. Part of the problem is that (line = in.readLine()) is always null, so the while loop at line 33 never runs. I do not know why it is always null, however. Can anyone offer me insight into this? This code should print the first paragraph of the Wikipedia article on CPython.
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.net.*;
import java.io.*;

public class WikiScraper {

    public static void main(String[] args) {
        scrapeTopic("/wiki/CPython");
    }

    public static void scrapeTopic(String url) {
        String html = getUrl("http://www.wikipedia.org/" + url);
        Document doc = Jsoup.parse(html);
        String contentText = doc.select("#mw-content-text > p").first().text();
        System.out.println(contentText);
    }

    public static String getUrl(String url) {
        URL urlObj = null;
        try {
            urlObj = new URL(url);
        } catch (MalformedURLException e) {
            System.out.println("The url was malformed!");
            return "";
        }
        URLConnection urlCon = null;
        BufferedReader in = null;
        String outputText = "";
        try {
            urlCon = urlObj.openConnection();
            in = new BufferedReader(new InputStreamReader(urlCon.getInputStream()));
            String line = "";
            while ((line = in.readLine()) != null) {
                outputText += line;
            }
            in.close();
        } catch (IOException e) {
            System.out.println("There was an error connecting to the URL");
            return "";
        }
        return outputText;
    }
}
If you enter http://www.wikipedia.org//wiki/CPython in a web browser, it is redirected to https://en.wikipedia.org/wiki/CPython. Java's URLConnection does not follow a redirect from http to https, so the body you read back is empty. Use
String html = getUrl("https://en.wikipedia.org/"+url);
instead of
String html = getUrl("http://www.wikipedia.org/"+url);
and then line = in.readLine() can really read something.
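To see the redirect for yourself, here is a small sketch (my own addition, not part of the fix) that checks the response code and Location header with HttpURLConnection, which will not hop from http to https on its own:
import java.net.HttpURLConnection;
import java.net.URL;

public class RedirectCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.wikipedia.org//wiki/CPython");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setInstanceFollowRedirects(false); // report the redirect instead of silently handling it
        int code = con.getResponseCode();
        System.out.println("HTTP status: " + code); // a 3xx status indicates a redirect
        if (code >= 300 && code < 400) {
            // The Location header carries the https target that a browser would follow
            System.out.println("Redirects to: " + con.getHeaderField("Location"));
        }
        con.disconnect();
    }
}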
I need to read URL addresses from a file and show the download speed, the title tag, and the size in KB.
I am stuck with the size: getContentLengthLong() returns a negative value.
I am not sure if it's correct, but I tried:
connection.setRequestProperty("Accept-Encoding", "identity");
and I also need some help with the download speed.
import java.net.URL;
import java.net.URLConnection;
import java.util.Scanner;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class UrlReader {

    static void readTextFromURL(String urlString) throws IOException {
        System.out.print(urlString + "\t"); // print the url
        URL url = new URL(urlString);
        URLConnection connection = url.openConnection();
        connection.setRequestProperty("Accept-Encoding", "identity");
        InputStream urlData = connection.getInputStream();
        //print the title
        Scanner scanner = new Scanner(urlData);
        String responseBody = scanner.useDelimiter("\\A").next();
        System.out.print(responseBody.substring(responseBody.indexOf("<title>") + 7, responseBody.indexOf("</title>")) + "\t");
        //print the size in KB
        long file_size = connection.getContentLengthLong();
        if (file_size < Long.MAX_VALUE) {
            System.out.println(file_size / 1024 + "KB");
        }
        // print the download speed(seconds)
        urlData.close();
        scanner.close();
    } // end readTextFromURL()

    public static void main(String[] args) {
        try {
            File file = new File("data.txt");
            FileInputStream ft = new FileInputStream(file);
            DataInputStream in = new DataInputStream(ft);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));
            String strline;
            String url; // The url from the command line or from user input.
            String urlLC;
            while ((strline = br.readLine()) != null) {
                url = strline;
                urlLC = url.toLowerCase();
                if (!(urlLC.startsWith("http://") || urlLC.startsWith("ftp://") || urlLC.startsWith("https://") ||
                        urlLC.startsWith("file://"))) {
                    url = "http://" + url;
                }
                try {
                    readTextFromURL(url);
                } catch (IOException e) {
                    System.out.println("\n*** Sorry, an error has occurred ***\n");
                    System.out.println(e);
                }
            }
            in.close();
        } catch (Exception e) {
            System.err.println("Error: " + e.getMessage());
        }
    } // end main
}
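getContentLengthLong() returns -1 when the server sends no Content-Length header, which is why the value is negative. One way to get both the size and the download speed regardless of that header is to count the bytes yourself and time the read; a rough sketch, assuming the same URLConnection setup as above (the URL is a placeholder):
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class SpeedCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/"); // placeholder URL
        URLConnection connection = url.openConnection();
        connection.setRequestProperty("Accept-Encoding", "identity");

        long start = System.nanoTime();
        long totalBytes = 0;
        try (InputStream urlData = connection.getInputStream()) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = urlData.read(buffer)) != -1) {
                totalBytes += n; // count the bytes that actually arrived
            }
        }
        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;

        System.out.println(totalBytes / 1024 + " KB");
        System.out.printf("%.2f KB/s%n", (totalBytes / 1024.0) / seconds);
    }
}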
I am trying to write a Java program which establishes a connection to Yahoo Finance and pulls some data off the website for a specific stock.
The program terminates with the exception "no line found", which is thrown at the if(input.hasNextLine()) statement. I get what the exception means, but I can't figure out what the error is.
I know that the problem is not in the URL construction, because the URL downloads the requested data from the web when copied into a web browser.
I hope someone can point me in the right direction; I have been puzzled for several hours, trying to search the forum, but no luck so far.
My code looks as follows:
import java.net.URL;
import java.net.URLConnection;
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.Scanner;

public class Download {

    public Download(String symbol, GregorianCalendar end, GregorianCalendar start) {
        //Creates the URL
        String url = "http://chart.finance.yahoo.com/table.csv?s=" + symbol +
                "&a=" + start.get(Calendar.MONTH) +
                "&b=" + start.get(Calendar.DAY_OF_MONTH) +
                "&c=" + start.get(Calendar.YEAR) +
                "&d=" + end.get(Calendar.MONTH) +
                "&e=" + end.get(Calendar.DAY_OF_MONTH) +
                "&f=" + end.get(Calendar.YEAR) +
                "&g=d&ignore=.csv";
        try {
            //Creates the URL object, and establishes connection
            URL yhoofin = new URL(url);
            URLConnection data = yhoofin.openConnection();
            //Opens an input stream, to read from
            Scanner input = new Scanner(data.getInputStream(), "UTF-8");
            System.out.println(input.nextLine());
            //skips the first line,
            if (input.hasNextLine()) {
                input.nextLine();
                //tries to print the data.
                while (input.hasNextLine()) {
                    String line = input.nextLine();
                    System.out.println(line);
                }
            }
            //closes connection
            input.close();
        } catch (Exception e) {
            System.err.println(e);
        }
    }
}

with the following main method:

import java.util.GregorianCalendar;

public class test {
    public static void main(String[] args) {
        GregorianCalendar start = new GregorianCalendar(2015, 7, 10);
        GregorianCalendar end = new GregorianCalendar(2016, 7, 10);
        String symbol = "NVO";
        Download test = new Download(symbol, end, start);
        System.out.println("Done");
    }
}
The server answers the first request with a redirect; read the Location header and follow it before opening the stream:

//http://chart.finance.yahoo.com/table.csv?s=ABCB4.SA&a=1&b=19&c=2017&d=2&e=19&f=2017&g=d&ignore=.csv
String url = "http://chart.finance.yahoo.com/table.csv?s=" + symbol + ".SA" +
        "&a=" + start.get(Calendar.MONTH) +
        "&b=" + start.get(Calendar.DAY_OF_MONTH) +
        "&c=" + start.get(Calendar.YEAR) +
        "&d=" + end.get(Calendar.MONTH) +
        "&e=" + end.get(Calendar.DAY_OF_MONTH) +
        "&f=" + end.get(Calendar.YEAR) +
        "&g=d&ignore=.csv";
System.out.println(url);
try {
    URL yhoofin = new URL(url);
    URLConnection data = yhoofin.openConnection();
    data.connect(); //not necessary
    System.out.println("Connection Open! = " + data.getHeaderFields().toString());
    String redirect = data.getHeaderField("Location");
    if (redirect != null) {
        data = new URL(redirect).openConnection();
    }
    List<String> lines = new ArrayList<>(); // requires java.util.List / java.util.ArrayList
    BufferedReader in = new BufferedReader(new InputStreamReader(data.getInputStream()));
    String inputLine;
    System.out.println();
    in.readLine(); //skip first line
    while ((inputLine = in.readLine()) != null) {
        System.out.println(inputLine);
        lines.add(inputLine);
    }
    in.close();
} catch (Exception e) {
    System.err.println(e);
}
I am trying to run this code and I am facing a "Null Pointer Exception" in my program. I used try and catch, but I do not know how to eliminate the problem.
Here is the code:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.net.*;
import java.io.*;
import java.lang.NullPointerException;

public class WikiScraper {

    public static void main(String[] args) throws IOException {
        scrapeTopic("/wiki/Python");
    }

    public static void scrapeTopic(String url) {
        String html = getUrl("http://www.wikipedia.org/" + url);
        Document doc = Jsoup.parse(html);
        String contentText = doc.select("#mw-content-text>p").first().text();
        System.out.println(contentText);
        System.out.println("The url was malformed!");
    }

    public static String getUrl(String url) {
        URL urlObj = null;
        try {
            urlObj = new URL(url);
        } catch (MalformedURLException e) {
            System.out.println("The url was malformed!");
            return "";
        }
        URLConnection urlCon = null;
        BufferedReader in = null;
        String outputText = "";
        try {
            urlCon = urlObj.openConnection();
            in = new BufferedReader(new InputStreamReader(urlCon.getInputStream()));
            String line = "";
            while ((line = in.readLine()) != null) {
                outputText += line;
            }
            in.close();
        } catch (IOException e) {
            System.out.println("There was an error connecting to the URL");
            return "";
        }
        return outputText;
    }
}
The Error shown is:
There was an error connecting to the URL
Exception in thread "main" java.lang.NullPointerException
at hello.WikiScraper.scrapeTopic(WikiScraper.java:17)
at hello.WikiScraper.main(WikiScraper.java:11)
You have
public static String getUrl(String url){
    // ...
    return "";
}
which always ends in an empty String when the connection fails. Jsoup.parse("") then produces a document in which select(...).first() returns null, hence the NullPointerException.
Try
Document doc = Jsoup.connect("http://example.com/").get();
for example.
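A minimal sketch of the whole scrape done that way, letting Jsoup open the connection (it also follows the http-to-https redirect itself); the selector is loosened to a descendant match because current Wikipedia markup nests the paragraphs inside a .mw-parser-output div:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.io.IOException;

public class WikiScraper {
    public static void main(String[] args) throws IOException {
        // Jsoup fetches the page and follows redirects on its own
        Document doc = Jsoup.connect("https://en.wikipedia.org/wiki/Python").get();
        // First paragraph of the article body, as in the question's selector
        Element firstPara = doc.select("#mw-content-text p").first();
        System.out.println(firstPara != null ? firstPara.text() : "selector matched nothing");
    }
}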