DOS command to get IE page source - Java

Is it possible to get the source of a webpage that is currently open in IE or Chrome, either from the command line or from Java code? I believe there has to be a way. If so, how could we fetch the content of a specific tab, given that Chrome and IE support multiple tabs?
I am trying to process content from hundreds of webpages; some of them automatically refresh at a regular 15-second interval, and some do not.
Yes, I could get the webpage source by using sockets or an instance of the URLConnection class. However, that doesn't provide the automatic refresh behaviour of a browser. The only option would be to hit the URL multiple times, which could be avoided if the browser's refresh functionality could be utilized.
Also, it would be great if the reader could comment on how to fill in text boxes programmatically and submit the request from the browser. Thanks.

There are several "scraping" frameworks in Java.
I personally like JSoup a lot, because it is lightweight and compact in code.
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
// get the source of a website in just 1 line of code
Document doc = Jsoup.connect("http://www.google.com").get();
// print the href of every hyperlink that ends in .html
Elements links = doc.select("a[href$=.html]");
for (Element lnk : links) System.out.println(lnk.attr("href"));
However, it does not render JavaScript or anything like that. It's simple and fast, but stupid.
I think you may prefer to use HtmlUnit instead, which is more like an invisible web browser. It even lets you simulate click events on buttons, execute JavaScript, and so on. You can make it mimic Internet Explorer or Firefox.
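For example, here is a minimal HtmlUnit sketch along those lines (the URL, form name and field names are made up; substitute the ones from your target page):
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlSubmitInput;
import com.gargoylesoftware.htmlunit.html.HtmlTextInput;

public class HtmlUnitExample {
    public static void main(String[] args) throws Exception {
        // The WebClient is the "invisible browser"; pass a BrowserVersion constant
        // to the constructor if you want it to mimic IE or Firefox.
        WebClient webClient = new WebClient();
        HtmlPage page = webClient.getPage("http://example.com/search");

        // Fill in a text box and submit the form.
        HtmlForm form = page.getFormByName("searchForm");
        HtmlTextInput queryBox = form.getInputByName("query");
        queryBox.setValueAttribute("my search term");
        HtmlSubmitInput submit = form.getInputByName("go");
        HtmlPage resultPage = submit.click();

        // The rendered source, after any JavaScript has run.
        System.out.println(resultPage.asXml());
    }
}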

You can use Selenium WebDriver - a set of drivers/add-ons for desktop and mobile browsers that lets you fully control them from your code, including getting the source of the currently loaded page (via the getPageSource() method), filling inputs and submitting forms, selecting text, clicking at specific points, and almost everything else that can be done in a browser.
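A rough sketch of the idea (the element names "username" and "loginForm" below are hypothetical):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class PageSourceExample {
    public static void main(String[] args) {
        // Starts a real Firefox instance; the Chrome and IE drivers work the same way.
        // (Recent Selenium versions also need the matching driver binary, e.g. geckodriver, on the PATH.)
        WebDriver driver = new FirefoxDriver();
        driver.get("http://example.com/login");

        // Fill in a text box and submit the form.
        driver.findElement(By.name("username")).sendKeys("myUser");
        driver.findElement(By.name("loginForm")).submit();

        // Source of the page currently loaded in that browser window.
        System.out.println(driver.getPageSource());

        driver.quit();
    }
}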

You could use a simple HTTP client such as commons-httpclient to get the source of your page.
Once you have the library set up, you can use the following code:
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpMethod;
import org.apache.commons.httpclient.methods.GetMethod;

HttpClient client = new HttpClient();
HttpMethod method = new GetMethod(url); // e.g. url = "http://www.google.com"
client.executeMethod(method);
String result = method.getResponseBodyAsString();
The result variable will hold the source code of the page, in this case Google's main search page, and you can do whatever you want with it. For example, you can keep refreshing the page from a Java thread and process each result, as sketched below.
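A minimal polling sketch built on the same commons-httpclient 3.x API, mimicking the 15-second auto-refresh mentioned in the question:
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpMethod;
import org.apache.commons.httpclient.methods.GetMethod;

public class PollingExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        while (true) {
            HttpMethod method = new GetMethod("http://www.google.com");
            try {
                client.executeMethod(method);
                String result = method.getResponseBodyAsString();
                // process(result) - do whatever you need with the page source
                System.out.println("fetched " + result.length() + " characters");
            } finally {
                method.releaseConnection();
            }
            Thread.sleep(15000); // re-fetch every 15 seconds
        }
    }
}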
You can find more information on the Commons HttpClient page.

Wget for Windows may help, if you mean "terminal" and not specifically the DOS operating system. There's also something called bitsadmin (which I'm not familiar with), and I found this in a search too: Jaunt - Java Web Scraping & Automation, if that'll help. I'm not a Java guy, but hopefully that points you in the right direction.

Related

Chrome/Firefox web browser automation to collect data

I would like to browse a website automatically in order to collect some data.
There's a page with a form. The form consists of a select and a submit button. Selecting an option from the select and clicking the submit button leads to another page with some tables containing the related data.
I need to collect this data and save it to a file for each option. I will probably need to go back to the first page to repeat the task for each option. The catch is that I don't know the exact number of options beforehand.
My idea is to do this task, preferably, with Firefox or Chrome. I think the only way to do it is via programming.
Could someone point me to an easy and fast way to do this? I know a little bit of Java, JavaScript and Python.
You might want to Google for a "web browser automation" tool like Selenium. Although not an exact fit for the purpose, I think it can be used to implement your requirement.
Since the task is relatively well constrained, I would avoid Selenium (it's a little brittle), and instead try this approach:
1. Get a comprehensive list of options from the first page and record it in a text file.
2. Capture, using a network monitoring tool like Fiddler, the traffic that is sent when you submit the first page. See exactly what is submitted to the server, and how (POST vs GET, parameter encoding, etc.).
3. Use a tool like curl to replay the request in the exact format you captured in step 2. Then write a batch script (using bash or python) that runs through all the values in the text file from step 1, calls curl for each value in the dropdown, and saves the curl output to files. A rough Java equivalent of that replay loop is sketched below.
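This sketch replays the captured request once per option, assuming the form is submitted as a plain URL-encoded POST; the URL and the "selectedOption" parameter name are placeholders, so use exactly what the Fiddler capture shows:
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;

public class ReplayRequests {
    public static void main(String[] args) throws Exception {
        // options.txt holds one dropdown value per line (collected in step 1)
        List<String> options = Files.readAllLines(Paths.get("options.txt"), StandardCharsets.UTF_8);
        for (String option : options) {
            String body = "selectedOption=" + URLEncoder.encode(option, "UTF-8");
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://example.com/search").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            // save the response for this option to its own file (step 3)
            try (InputStream in = conn.getInputStream()) {
                Files.copy(in, Paths.get("result-" + option + ".html"),
                        StandardCopyOption.REPLACE_EXISTING);
            }
            conn.disconnect();
        }
    }
}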
I found a solution to my problem. It's called HtmlUnit:
http://htmlunit.sourceforge.net/gettingStarted.html
HtmlUnit is a "GUI-Less browser for Java programs".
It allows web browsing and data collection from Java, and it's very simple and easy to use.
Not exactly what I asked, but it's better. At least to me.

Command line based HTTP POST to retrieve data from javascript-rich webpage

I'm not sure if this is possible but I would like to retrieve some data from a web page that uses Javascript to render data. This would be from a linux shell.
What I am able to do now:
HTTP POST using curl/lynx/wget to log in and get headers from the command line
use the headers to get into 'secure' locations of the webpage on the command line
However, the only elements rendered on the page are the static HTML. Most of the info I need is rendered dynamically with JS (albeit eventually as HTML as well) and doesn't show up in a command-line browser. I understand the issue is the lack of a JS interpreter.
As such... some workarounds I thought might be possible are:
calling a full browser from the command line and somehow passing the info back to stdout. This would mean that I have to be able to POST.
passing the headers (with session info, etc.) I got from curl to one of these full browsers and again dumping the output HTML back to stdout. It could even be a print-screen of the window if all else fails.
a pure Java solution would be OK too.
Does anyone have experience doing something similar and succeeding?
Thanks!
You can use WebDriver to do this; you just need to have a web browser installed. There are other solutions as well, such as Selenium and HtmlUnit (the latter works without a browser but might behave differently).
You can find an example Selenium project here.
WebDriver
WebDriver is a tool for writing automated tests of websites. It aims
to mimic the behaviour of a real user, and as such interacts with the
HTML of the application.
Selenium
Selenium automates browsers. That's it. What you do with that power is
entirely up to you. Primarily it is for automating web applications
for testing purposes, but is certainly not limited to just that.
Boring web-based administration tasks can (and should!) also be
automated as well.
HtmlUnit
HtmlUnit is a "GUI-Less browser for Java programs". It models HTML
documents and provides an API that allows you to invoke pages, fill
out forms, click links, etc... just like you do in your "normal"
browser.
I would recommend WebDriver because it does not require a standalone server the way Selenium (RC) does, while HtmlUnit might be suitable if you don't want to install a browser or worry about Xvfb in a headless environment.
You might want to see what Selenium can do for you. It has numerous language drivers (Java included) that can be used to interact with the browser to process content, typically for testing and verification purposes. I'm not sure it gives you exactly what you are looking for, but I wanted to make you aware of its existence and potential.
This is impossible unless you set up a WebSocket, and even then I guess it really depends.
Could you detail your objective? For my personal curiosity :-)

How to use wkhtmltopdf in Java web application?

I am a newbie with wkhtmltopdf. I am wondering how to use wkhtmltopdf with my Dynamic Web Project in Eclipse. How do I integrate wkhtmltopdf with my Java dynamic web application?
Are there any tutorials available for beginners with wkhtmltopdf?
(Basically, I would like to use wkhtmltopdf in my web application so that when the user clicks a save button, the current page is saved to a PDF file.)
First, a technical note: Because you want to use wkhtmltopdf in a web project, if and when you deploy to a Linux server machine that you access via ssh (i.e. over the network), you will need to either use the patched Qt version, or run an X server, e.g. the dummy X server xvfb. (I don't know what happens if you deploy to a server running an operating system other than Linux.)
Second, it should be really quite simple to use wkhtmltopdf from any language in a web project.
If you just want to save the server-generated version of the current page, i.e. without any changes that might have been made on the client, like the user filling in forms or JavaScript adding new DOM elements, you just need an extra optional parameter like ?generate=pdf on the end of your URL, which causes that page to be generated as a PDF; the PDF button then simply links to that URL. This may be a lot of work to add to each page manually if you are just using plain JSP or something, but depending on which web framework you are using, the framework may offer some help for implementing the same action on every page.
To implement this approach, you would probably want to capture the response by wrapping the response object and overriding its getWriter() and getOutputStream() methods.
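A minimal sketch of that wrapper idea, using the javax.servlet API (the class name and the ?generate=pdf trigger are just illustrations of the approach above):
import java.io.CharArrayWriter;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Buffers everything the page writes so a servlet filter can later hand the
// captured HTML to wkhtmltopdf (e.g. when ?generate=pdf is present). Pages
// that write binary data via getOutputStream() would need that method
// overridden in the same way.
public class CapturingResponseWrapper extends HttpServletResponseWrapper {

    private final CharArrayWriter buffer = new CharArrayWriter();

    public CapturingResponseWrapper(HttpServletResponse response) {
        super(response);
    }

    @Override
    public PrintWriter getWriter() {
        // divert the page's output into the in-memory buffer instead of the client
        return new PrintWriter(buffer);
    }

    public String getCapturedHtml() {
        return buffer.toString();
    }
}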
Another approach is to have a button "submit and generate PDF" which will generate the next page as a PDF. This might make more sense if you have a form the user needs to fill in - I don't know. It's a design decision really.
A third approach is to use Javascript to upload the current state of the page back to the server, and process that using wkhtmltopdf. This will work on any page. (This can even be used on any site, not just yours, if you make it a bookmarklet. Just an idea that occurred to me - it may not be a good idea.)
A fourth approach is, because wkhtmltopdf can fetch URLs itself, to pass it the URL of your page instead of the contents of the page (this will only work if the request was an HTTP GET, or is equivalent to an HTTP GET on the same URL). This has a small amount of overhead compared to capturing your own response output, but it will probably be negligible. You will also very likely need to copy the cookie(s) into a cookie jar with this approach, since presumably your user might be logged in or have an implicit session.
So as you can see there are quite a lot of choices!
Now, the question remains: once your server has the necessary HTML from any of the above approaches, how do you feed it into wkhtmltopdf? This is pretty simple. You will need to spawn an external process using either Runtime.getRuntime().exec() or the newer ProcessBuilder API - see http://www.java-tips.org/java-se-tips/java.util/from-runtime.exec-to-processbuilder.html for a comparison. If you are smart about it, you should be able to do this without creating any temporary files.
One of the wkhtmltopdf websites is currently down, but the main README is available here, which explains the command line arguments.
This is merely an outline answer which gives some pointers. If you need more details, let us know what specifically you need to know.
Additional info:
If you do end up calling wkhtmltopdf in an external process from Java (or, for that matter, any language), please note that the "normal" output you see when using wkhtmltopdf from the command line (i.e. what you would expect to find on STDOUT) is not on STDOUT but on STDERR. I raised this issue on the project page
http://code.google.com/p/wkhtmltopdf/issues/detail?id=825
and was told that this is by design, because wkhtmltopdf supports writing the actual PDF output to STDOUT. Please see the link for more details and Java code.
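Putting those two points together, a rough ProcessBuilder sketch (assuming your wkhtmltopdf build accepts "-" for stdin and stdout, as its README describes) might look like this:
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class WkhtmltopdfRunner {

    public static byte[] htmlToPdf(String html) throws Exception {
        // "-" as input and output tells wkhtmltopdf to read HTML from stdin
        // and write the PDF to stdout, so no temporary files are needed.
        ProcessBuilder pb = new ProcessBuilder("wkhtmltopdf", "-", "-");
        // progress messages go to stderr; let them pass through so the
        // process can never block on an unread stderr pipe
        pb.redirectError(ProcessBuilder.Redirect.INHERIT);
        Process process = pb.start();

        // for very large pages you would write stdin and read stdout on
        // separate threads; for typical page sizes this sequential version is fine
        try (OutputStream stdin = process.getOutputStream()) {
            stdin.write(html.getBytes(StandardCharsets.UTF_8));
        }
        byte[] pdf;
        try (InputStream stdout = process.getInputStream()) {
            pdf = stdout.readAllBytes(); // Java 9+; use a read loop on older JDKs
        }
        process.waitFor();
        return pdf;
    }

    public static void main(String[] args) throws Exception {
        Files.write(Paths.get("out.pdf"),
                htmlToPdf("<html><body><h1>Hello PDF</h1></body></html>"));
    }
}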
java-wkhtmltopdf-wrapper provides an easy API for using wkhtmltopdf in Java.
It also works out-of-the-box on a headless server with xvfb.
For example, on an Ubuntu or Debian server:
aptitude install wkhtmltopdf xvfb
Then in Java:
Pdf pdf = new Pdf();                                  // wrapper around the wkhtmltopdf binary
pdf.addPage("http://www.google.com", PageType.url);   // add a page by URL
pdf.saveAs("output.pdf");                             // run wkhtmltopdf and write the PDF
See the examples on their Github page for more options.

How can Perl interact with an ajax form

I'm writing a Perl program that was doing a simple GET to retrieve results and process them. But the site has been updated and now has a JavaScript (AJAX) component that handles the results (so the actual data is not in the source code anymore).
This is the site:
http://wro.westchesterclerk.com/legalsearch.aspx
Try putting in:
Index Number: 11103
Year: 2009
I want to be able to programmatically enter the "index number" and "year" at the bottom of the form where it says "search by number" and then retrieve the results listed next to it.
I've written many programs in Perl that simply pass variables via the URL, and the results are listed in the source code, so it's easy to parse (using LWP::Simple).
Like:
$html = get("http://www.url.com?id=$somenum&year=$someyear")
But this is totally new to me and I don't know where to begin.
I'm somewhat familiar with LWP::UserAgent and Mechanize.
I'd really appreciate any help.
Thanks!
That sort of question gets asked a lot. The standard answer is Wireshark.
I was just using it on that website with the test data you gave, and extracted the single POST request that is responsible. This lets you bypass the JavaScript altogether.
It might be more logical for you to use one of the modules which drives a browser. Something like Mozilla::Mechanize or the Selenium tools.
A browser knows best how to interact with the server using AJAX and re-render the DOM and so on, so build your script on top of that ability.
What you're asking to do in this case is hard. Not impossible, but hard.
Method A:
You can sift through their JavaScript code. What their "AJAX" is doing is making a GET/POST request to another web page and dynamically loading the results. If you can decipher what that link is and the proper arguments, you can continue to use GET. I would recommend getting the Firebug plugin and any other tool that will help you de-obfuscate their JavaScript.
Another method:
If your program can drive a web browser with javascript: URL support (like Firefox), you could programmatically go to the addresses below, then wait a moment and grab your data.
http://wro.westchesterclerk.com/legalsearch.aspx
javascript: function go() { document.getElementById('ctl00_tbSearchArea__ctl1_cphLegalSearch_splMain_tmpl0_tbLegalSearchType__ctl0_txtIndexNo').value=11109; document.getElementById('ctl00_tbSearchArea__ctl1_cphLegalSearch_splMain_tmpl0_tbLegalSearchType__ctl0_txtYear').value='09';searchClick();} go();
This is a method we have used, along with mozembed, to programmatically get around this stuff. Recently we switched to WebKit. And to keep this from taking up a real display, we have used Xvfb/Xvnc to create a virtual desktop to load the browser in.
Those are the methods I have come up with so far. Let me know if you come up with another. Also, I hope I helped.

autogenerate HTTP screen scraping Java code

I need to screen scrape some data from a website, because it isn't available via their web service. When I've needed to do this previously, I've written the Java code myself using Apache's HTTP client library to make the relevant HTTP calls to download the data. I figured out the relevant calls I needed to make by clicking through the relevant screens in a browser while using the Charles web proxy to log the corresponding HTTP calls.
As you can imagine this is a fairly tedious process, and I'm wondering if there's a tool that can actually generate the Java code corresponding to a browser session. I expect the generated code wouldn't be as pretty as code written manually, but I could always tidy it up afterwards. Does anyone know if such a tool exists? Selenium is one possibility I'm aware of, though I'm not sure if it supports this exact use case.
Thanks,
Don
I would also add +1 for HtmlUnit, since its functionality is very powerful: if you need behaviour 'as though a real browser were scraping and using the page', that's definitely the best option available. HtmlUnit executes (if you want it to) the JavaScript in the page.
It currently has full-featured support for all the main JavaScript libraries and will execute JS code using them. Correspondingly, you can get handles to the JavaScript objects in the page programmatically within your test.
If, however, the scope of what you are trying to do is smaller, more along the lines of reading some of the HTML elements where you don't much care about JavaScript, then NekoHTML should suffice. It's similar to JDOM, giving programmatic (rather than XPath) access to the tree. You would probably need to use Apache's HttpClient to retrieve pages.
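A rough sketch of that lighter-weight combination, assuming NekoHTML's DOMParser and the commons-httpclient 3.x API (the URL is a placeholder):
import java.io.StringReader;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.cyberneko.html.parsers.DOMParser;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class NekoExample {
    public static void main(String[] args) throws Exception {
        // fetch the raw page with commons-httpclient...
        HttpClient client = new HttpClient();
        GetMethod get = new GetMethod("http://example.com/");
        client.executeMethod(get);
        String html = get.getResponseBodyAsString();
        get.releaseConnection();

        // ...then let NekoHTML turn the (possibly messy) HTML into a DOM tree
        DOMParser parser = new DOMParser();
        parser.parse(new InputSource(new StringReader(html)));
        Document doc = parser.getDocument();

        // NekoHTML upper-cases element names by default
        NodeList anchors = doc.getElementsByTagName("A");
        for (int i = 0; i < anchors.getLength(); i++) {
            Element a = (Element) anchors.item(i);
            System.out.println(a.getAttribute("href"));
        }
    }
}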
The manageability.org blog has an entry which lists a whole bunch of web page scraping tools for Java. However, I do not seem to be able to reach it right now, but I did find a text only representation in Google's cache here.
You should take a look at HtmlUnit - it was designed for testing websites but works great for screen scraping and navigating through multiple pages. It takes care of cookies and other session-related stuff.
I would say I personally like to use HtmlUnit and Selenium as my 2 favorite tools for Screen Scraping.
A tool called The Grinder allows you to script a session to a site by going through its proxy. The output is Python (runnable in Jython).
