Parsing source video file links from URL or page? - Java

I want to know whether I can parse the source link of a video from a site. I've seen apps doing the same. For example, how can I parse video links from the site
http://www.hotstar.com/movies/2-states/1000034502/watch
I don't know how the scraping works, but any pointer in the right direction would help. The app I found doing this is called Videoder.
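As a starting point, jsoup can pull a video URL when the player is declared directly in the HTML. Below is a minimal sketch under that assumption; note that sites like Hotstar typically inject the stream URL with JavaScript (often behind DRM), so the static HTML may contain no usable link at all, and you would need to inspect the page's network traffic instead.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class VideoLinkParser {
    public static void main(String[] args) throws Exception {
        // Fetch the page; a browser-like User-Agent avoids some bot blocks.
        Document doc = Jsoup.connect("http://www.hotstar.com/movies/2-states/1000034502/watch")
                .userAgent("Mozilla/5.0")
                .get();

        // Case 1: the player is a plain <video src="..."> element.
        for (Element video : doc.select("video[src]")) {
            System.out.println("video src: " + video.absUrl("src"));
        }

        // Case 2: the sources are nested <source> elements inside <video>.
        for (Element source : doc.select("video > source[src]")) {
            System.out.println("source src: " + source.absUrl("src"));
        }
    }
}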

Related

Need help to scrape video streaming URLs from websites for an Android app

Hi, I need to scrape video streaming URLs from these websites:
https://afdah.to
https://flixanity.xyz
http://streamm4u.com
I am using jsoup to extract the website content. However, I have no idea how I can get the video streaming links from these websites. I tried reading their source code but couldn't find anything.
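Stream URLs on sites like these are usually assembled by JavaScript rather than written into <video> tags, so jsoup's DOM selectors won't find them. One common workaround is to fetch the raw HTML (inline scripts included) and scan it for URLs ending in .m3u8 or .mp4. A rough sketch, assuming the stream URL appears literally somewhere in the page source; if it is fetched by a later XHR call, you would need to replicate that request instead:

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.jsoup.Jsoup;

public class StreamUrlScanner {
    // Matches absolute URLs ending in a common stream/container extension.
    private static final Pattern STREAM_URL =
            Pattern.compile("https?://[^\"'\\s]+\\.(?:m3u8|mp4)[^\"'\\s]*");

    public static void main(String[] args) throws Exception {
        // Fetch the raw page source, inline <script> contents included.
        String html = Jsoup.connect("https://afdah.to")
                .userAgent("Mozilla/5.0")
                .get()
                .html();

        Matcher m = STREAM_URL.matcher(html);
        while (m.find()) {
            System.out.println("candidate stream: " + m.group());
        }
    }
}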

Downloading an Excel file from a web page when it doesn't appear in the source code? - Java

The link I'm working with is: https://www.openml.org/t/31
If you scroll down to the bottom of the page, you should see an "Excel" button which downloads the data as an Excel spreadsheet.
I understand how to download a file from a URL using HttpURLConnection, but this is a bit different. The Excel button doesn't really lead anywhere: from what I found, this download has no URL I can point java.net.URLConnection at, nor does it appear in the source code.
I am using JSoup to download some information from the web page, and one thing I need to download is this Excel file - I've been stuck on this for several days. The "link address" of this download just takes me back to the OpenML homepage.
Although it doesn't appear in the source code, inspect element shows:
<a class="dt-button buttons-excel buttons-html5" tabindex="0" aria-controls="tasktable" href="#"><span>Excel</span></a>
But this href "#" takes me back to the homepage, so I can't use it as the download link.
I'm fairly new to JSoup and HTML. Am I looking in the right place, or am I oblivious to some obvious information right in front of me?
Is JSoup even the right tool here? Would Apache be more useful?
Thanks
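The class names on that button (dt-button buttons-excel buttons-html5) mark it as a DataTables HTML5 export button: the spreadsheet is built in the browser by JavaScript from the table it controls (aria-controls="tasktable"), so there is no download URL to fetch, and the href "#" is just a placeholder. One way around it is to export the table data yourself. A minimal sketch, assuming the rows of table#tasktable are present in the served HTML; if DataTables fills the table via an AJAX call instead, find that JSON endpoint in the browser's Network tab and download it with HttpURLConnection:

import java.io.PrintWriter;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class TableExporter {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("https://www.openml.org/t/31").get();

        // Write each row of the table the Excel button exports as CSV,
        // which Excel opens directly. Apache POI could produce real .xlsx.
        try (PrintWriter out = new PrintWriter("tasktable.csv", "UTF-8")) {
            for (Element row : doc.select("table#tasktable tr")) {
                StringBuilder line = new StringBuilder();
                for (Element cell : row.select("th, td")) {
                    if (line.length() > 0) line.append(',');
                    // Quote cells so embedded commas don't break columns.
                    line.append('"').append(cell.text().replace("\"", "\"\"")).append('"');
                }
                out.println(line);
            }
        }
    }
}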

Android: get news article content

I am creating a news app. I have the URL to the article's page, e.g. http://www.bbc.co.uk/news/technology-33379571, and I need a way to extract the content of the article.
I have tried jsoup, but that gives all the HTML tags. There is a <main-article-body> tag, but it only gives the link to the article I am trying to extract. I know Boilerpipe does exactly this, but it doesn't work on Android. I am really stuck with this problem.
Any help will be much appreciated.
I have worked on a few data extraction applications in .NET (C#) and have used regular expressions to extract content from news websites.
The basic idea is to first extract all the <a href> links you need, then fetch each article page with a web request, and finally use regular expressions to extract the news body, as in the sketch below.
Note: a problem with this approach is that you will need to change your regular expressions whenever the source site's markup changes.
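Here is roughly the same idea in Java, since the questions here are Java/Android. The body pattern below targets a hypothetical <div class="story-body"> wrapper purely for illustration; you would have to adapt it to the actual markup of the site you scrape, and, as noted above, re-adapt it whenever that markup changes.

import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ArticleScraper {
    // Step 1 pattern: anchor tags and their href targets.
    private static final Pattern LINKS = Pattern.compile("<a\\s[^>]*href=\"([^\"]+)\"");
    // Step 2 pattern: the article body wrapper. The class name here is a
    // hypothetical example; adjust it to the target site's actual markup.
    private static final Pattern BODY =
            Pattern.compile("<div class=\"story-body\">(.*?)</div>", Pattern.DOTALL);

    static String fetch(String url) throws Exception {
        try (Scanner s = new Scanner(new URL(url).openStream(), StandardCharsets.UTF_8.name())) {
            return s.useDelimiter("\\A").next(); // read the whole response
        }
    }

    public static void main(String[] args) throws Exception {
        String page = fetch("http://www.bbc.co.uk/news/technology-33379571");

        // Pull out every linked URL on the page.
        Matcher links = LINKS.matcher(page);
        while (links.find()) {
            System.out.println("link: " + links.group(1));
        }

        // Pull out the article body and strip the remaining tags.
        Matcher body = BODY.matcher(page);
        if (body.find()) {
            System.out.println(body.group(1).replaceAll("<[^>]+>", " ").trim());
        }
    }
}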

Android Google Search by uploading Image

I am developing an Android application. In my application the user takes a photo using the camera, and that photo has to be uploaded to the URL http://images.google.com/searchbyimage. I want to know how to upload the file to this URL, how to parse the response, and how to display the images in a grid.
Google image search is based on metadata and actual file properties, so what you want to do may not work as you wish, since your photo will be a completely different image.
It might only work with landmarks: for example, if you take a photo of Big Ben and search for that picture on Google, it might find things related to Big Ben. The same goes for brands; a picture of a Coke bottle might work. But if you take a picture of, say, a person or an ordinary object, that will not work, I'm afraid.
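For the upload mechanics themselves, search by image expects a multipart/form-data POST. Below is a bare-bones sketch with HttpURLConnection; note that the upload path (/searchbyimage/upload) and the form field name (encoded_image) are assumptions based on the web form and may change, and Google may block or redirect clients that don't look like browsers:

import java.io.File;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;

public class SearchByImageUpload {
    public static void main(String[] args) throws Exception {
        File photo = new File("photo.jpg");
        String boundary = "----JavaUploadBoundary";

        // Assumed endpoint and field name; verify against the live form.
        URL url = new URL("http://images.google.com/searchbyimage/upload");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        // Don't auto-follow redirects: the results page URL usually arrives
        // in the Location header of the response.
        conn.setInstanceFollowRedirects(false);
        conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(("--" + boundary + "\r\n"
                    + "Content-Disposition: form-data; name=\"encoded_image\"; filename=\"photo.jpg\"\r\n"
                    + "Content-Type: image/jpeg\r\n\r\n").getBytes("UTF-8"));
            out.write(Files.readAllBytes(photo.toPath())); // raw image bytes
            out.write(("\r\n--" + boundary + "--\r\n").getBytes("UTF-8"));
        }

        System.out.println("HTTP " + conn.getResponseCode());
        System.out.println("Location: " + conn.getHeaderField("Location"));
    }
}

On Android, remember that this must run off the main thread (e.g. in an AsyncTask or a background executor), or the system will throw a NetworkOnMainThreadException.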

Java program to capture images from a web page and output their links to an Excel file

I want a Java app that captures all the images (and preferably the data in other tags too) from a web page and writes their links to an Excel file.
While I know my way around Excel files and Java, I was just wondering if there's any way to capture images from web pages.
A quick Google search didn't help.
Obviously there is.
Since the image links are in the page source, you can start with the simplest solution: get the page source, extract the image links, and download them.
KISS ;-)
You probably need to parse the HTML of the web page and pull the links to images from the relevant tags (mostly <img src> attributes); see the sketch below.
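A short jsoup sketch of that approach: it collects the absolute URLs from all <img src> attributes and writes them to a CSV file, which Excel opens directly. For a genuine .xlsx you could feed the same list to Apache POI instead.

import java.io.PrintWriter;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ImageLinkExporter {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("https://example.com").get();

        try (PrintWriter out = new PrintWriter("image-links.csv", "UTF-8")) {
            out.println("image_url"); // header row
            for (Element img : doc.select("img[src]")) {
                // absUrl resolves relative src values against the page URL.
                out.println(img.absUrl("src"));
            }
        }
    }
}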
