How can Perl interact with an AJAX form - java

I'm writing a Perl program that used to do a simple GET request to retrieve results and process them. But the site has been updated and now has a JavaScript component that handles the results (so the actual data is no longer in the page source).
This is the site:
http://wro.westchesterclerk.com/legalsearch.aspx
Try putting in:
Index Number: 11103
Year: 2009
I want to be able to programmatically enter the "index number" and "year" at the bottom of the form where it says "search by number" and then retrieve the results listed next to it.
I've written many programs in Perl that simply pass variables via the URL, and the results appear in the page source, so they're easy to parse (using LWP::Simple).
Like:
$html = get("http://www.url.com?id=$somenum&year=$someyear");
But this is totally new to me and I don't know where to begin.
I'm somewhat familiar with LWP::UserAgent and WWW::Mechanize.
I'd really appreciate any help.
Thanks!

That sort of question gets asked a lot. The standard answer is Wireshark.
I just ran it against that site with the test data you gave and extracted the single POST request that does the work. Replaying that request lets you bypass the JavaScript altogether.
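Here is a rough sketch of what replaying such a captured POST looks like (shown in Java for illustration; the same form fields go into an LWP::UserAgent or WWW::Mechanize request in Perl). Every field name and value below is a placeholder: use the exact parameter names, plus the hidden ASP.NET fields such as __VIEWSTATE, that the Wireshark capture shows you.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class ReplayCapturedPost {
    public static void main(String[] args) throws Exception {
        // Placeholder field names -- copy the real ones (including __VIEWSTATE,
        // __EVENTVALIDATION, etc.) from the POST you captured in Wireshark.
        Map<String, String> form = new LinkedHashMap<>();
        form.put("__VIEWSTATE", "PASTE_CAPTURED_VALUE");
        form.put("indexNumberField", "11103");
        form.put("yearField", "2009");

        // URL-encode the form body exactly like a browser would.
        String body = form.entrySet().stream()
                .map(e -> URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8) + "="
                        + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://wro.westchesterclerk.com/legalsearch.aspx"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The result rows are in this HTML; parse them out as usual.
        System.out.println(response.body());
    }
}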

It might be more logical for you to use one of the modules that drive a browser, something like Mozilla::Mechanize or the Selenium tools.
A browser knows best how to talk to the server over AJAX, re-render the DOM, and so on, so build your script on top of that ability.
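For example, a minimal Selenium WebDriver sketch in Java might look like the following. The text-field IDs are copied from the page (they also appear in the javascript: snippet in a later answer), and the search control's locator is a guess; verify both in your browser's inspector.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LegalSearchDriver {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new FirefoxDriver();   // a real browser runs the page's JavaScript for you
        try {
            driver.get("http://wro.westchesterclerk.com/legalsearch.aspx");

            driver.findElement(By.id(
                "ctl00_tbSearchArea__ctl1_cphLegalSearch_splMain_tmpl0_tbLegalSearchType__ctl0_txtIndexNo"))
                .sendKeys("11103");
            driver.findElement(By.id(
                "ctl00_tbSearchArea__ctl1_cphLegalSearch_splMain_tmpl0_tbLegalSearchType__ctl0_txtYear"))
                .sendKeys("2009");

            // Placeholder locator: point this at whatever element fires the page's searchClick() handler.
            driver.findElement(By.id("btnSearch")).click();

            Thread.sleep(3000);   // crude wait for the AJAX update; a WebDriverWait is nicer
            System.out.println(driver.getPageSource());   // the results are now in the rendered DOM
        } finally {
            driver.quit();
        }
    }
}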

What you're asking to do in this case is hard. Not impossible, but hard.
Method A:
You can sift through their JavaScript code. What their "AJAX" is doing is making a GET/POST request to another web page and dynamically loading the results. If you can decipher what that link is and what the proper arguments are, you can continue to use get. I would recommend getting the Firebug plugin and any other tool that will help you de-obfuscate their JavaScript.
Another Method:
If your program can drive a web browser that supports javascript: URLs (like Firefox), you could programmatically go to these addresses, wait a moment, and then grab your data.
http://wro.westchesterclerk.com/legalsearch.aspx
javascript: function go() { document.getElementById('ctl00_tbSearchArea__ctl1_cphLegalSearch_splMain_tmpl0_tbLegalSearchType__ctl0_txtIndexNo').value = 11109; document.getElementById('ctl00_tbSearchArea__ctl1_cphLegalSearch_splMain_tmpl0_tbLegalSearchType__ctl0_txtYear').value = '09'; searchClick(); } go();
This is a method we have used, along with mozembed, to get around this stuff programmatically. Recently we switched to WebKit, and to keep it from needing a real display we use Xvfb/Xvnc to create a virtual desktop to load the browser in.
Those are the methods I have come up with so far. Let me know if you come up with another. I hope this helps.

Related

How can I extract a dynamic string/word from a website using Java

Hello everyone, here is my problem.
I want to extract one of two words from a website: "won" or "loss". If I can find those words on the website, I will be able to write the program I am working on. The problems I have are...
When I write a Java program to fetch the HTML from the site, it only gives me the HTML that doesn't change, i.e. it doesn't give me the dynamically generated parts.
When I "inspect element" on the website, it shows me exactly what I want: the HTML tags say whether I won or lost. However, if I simply view source, it doesn't show the dynamic content you see when inspecting elements.
Is there a way for me to write code that sees what "inspect element" shows for the website and keeps track of the part of the HTML that changes between "won" and "loss"?
I've had trouble with something like this before, and since you haven't given many details I will give you the best answer I can...
It would help to know more; maybe edit your question to include:
Your code: show me what you've got
The HTML of the page
The APIs or frameworks used in your application
So the issue seems to be that when you request the site, the information is not there yet. Normally this doesn't happen, since most web pages include their information at load time.
These days a lot is done with JavaScript, so that is probably the part you are having problems with. JavaScript can load information onto the page dynamically at any time. It need not be at load time, and even if to the eye the data looks like it is there when the page loads, it may not be, since the update happens too fast to notice.
Look into the JavaScript code, find the GET, POST, or PUT request it makes, and follow it to where it loads the data from. Then mimic that request in your program.
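For example, once you have found the URL the page's JavaScript requests, a small Java sketch to fetch it directly and check for the word could look like this. The endpoint below is hypothetical; use the one you see in the browser's network tab.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResultChecker {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: replace with the URL the page's JavaScript actually calls.
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://example.com/ajax/result.php?match=123"))
                .GET()
                .build();

        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();

        // The dynamically loaded fragment contains the word, even though the page's static source does not.
        if (body.contains("won")) {
            System.out.println("won");
        } else if (body.contains("loss")) {
            System.out.println("loss");
        } else {
            System.out.println("result not found in response");
        }
    }
}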

Chrome/Firefox web browser automation for collecting data

I would like to browse a website automatically to collect some data.
There's a page with a form. The form consists of a select and a submit button. Selecting an option from the select and clicking the submit button leads to another page where there are some tables with related data.
I need to collect this data for each option and save it to a file. I will probably need to go back to the first page to repeat the task for each option. The catch is that I don't know the exact number of options in advance.
My idea is to do this task, preferably, with Firefox or Chrome. I think the only way to do it is via programming.
Could someone point me to an easy and fast way to do this? I know a little bit of Java, JavaScript, and Python.
You might want to google "web browser automation" tools like Selenium. Although not built exactly for this purpose, I think it can be used to implement your requirement.
Since the task is relatively well constrained, I would avoid Selenium (it's a little brittle), and instead try this approach:
Get a comprehensive list of options from the first page, record that in a text file
Capture, using a network monitoring tool like Fiddler, the traffic that is sent when you submit the first page. See what exactly is submitted to the server - and how (POST vs GET, parameter encoding, etc).
Use a tool like curl to replay the request in the exact format that you captured in step 2. Then write a batch script (using bash or Python) to run through all the values in the text file from step 1, call curl for each value in the dropdown, and save the output to files.
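Since you mentioned knowing some Java, here is a rough sketch of steps 1 and 3 combined: read the saved option list and replay the captured request for each value. The URL and the parameter name are placeholders for whatever Fiddler shows you.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ReplayAllOptions {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<String> options = Files.readAllLines(Path.of("options.txt"));   // step 1's output

        for (String option : options) {
            // "searchOption" and the URL are placeholders: use the exact parameter
            // names and endpoint captured with Fiddler in step 2.
            String body = "searchOption=" + URLEncoder.encode(option, StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/results"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            String safeName = option.replaceAll("\\W+", "_");
            Files.writeString(Path.of("result-" + safeName + ".html"), response.body());
        }
    }
}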
I found a solution to my problem. It's called HtmlUnit:
http://htmlunit.sourceforge.net/gettingStarted.html
HtmlUnit is a "GUI-Less browser for Java programs".
It allows web browsing and data collection from Java, and it's very simple and easy to use.
Not exactly what I asked for, but it's better. At least to me.
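For reference, a minimal HtmlUnit sketch of the select-submit-save loop might look like the following. The form, select, and button names are placeholders to replace with the real ones from the page; note that the number of options is discovered at runtime, so it doesn't need to be known in advance.

import java.nio.file.Files;
import java.nio.file.Path;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlOption;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlSelect;
import com.gargoylesoftware.htmlunit.html.HtmlSubmitInput;

public class CollectAllOptions {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            HtmlPage firstPage = webClient.getPage("https://example.com/search");   // placeholder URL
            HtmlForm firstForm = firstPage.getFormByName("searchForm");             // placeholder names
            int optionCount = firstForm.getSelectByName("category").getOptionSize();

            for (int i = 0; i < optionCount; i++) {
                // Reload the first page each time, i.e. "go back" for the next option.
                HtmlPage page = webClient.getPage("https://example.com/search");
                HtmlForm form = page.getFormByName("searchForm");
                HtmlSelect select = form.getSelectByName("category");

                HtmlOption option = select.getOption(i);
                select.setSelectedAttribute(option, true);

                HtmlSubmitInput submit = form.getInputByName("btnSubmit");
                HtmlPage results = submit.click();

                Files.writeString(Path.of("results-" + i + ".html"), results.asXml());
            }
        }
    }
}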

Scrape multiple pages of a website using WebClient (Java)

I am trying to scrape a website using WebClient. I am able to get the data on the first page and parse it, but I do not know how to read the data on the second page; the website calls a JavaScript function to navigate to the second page. Can anyone suggest how I can get the data from the next pages?
Thanks in advance
The problem you're going to have is that while you (a person) can read the JavaScript in the first page and see that it navigates to another page, having the computer do this is going to be hard.
If you could identify the block of code performing the navigation, you would then need to execute it in such a way that allowed your program to extract the URL. This again is going to be very specific to the structure of the JavaScript and would require a person to identify this.
In short, I think you're dead in the water with this one, though it serves as a good example of why the Unobtrusive JavaScript concept is so important.
This framework integrates HtmlUnit with its headless JavaScript-enabled browser to fully support scraping multiple pages in the same WebClient session: https://github.com/subes/invesdwin-webproxy
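For example, with plain HtmlUnit a sketch of following a JavaScript-driven "next page" link could look like this. The URL and the link text are assumptions; locate whatever element actually triggers the site's pagination.

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class NextPageScraper {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            webClient.getOptions().setJavaScriptEnabled(true);   // let WebClient run the page's JavaScript

            HtmlPage firstPage = webClient.getPage("https://example.com/listing");   // placeholder URL
            System.out.println(firstPage.asXml());                                   // data from page 1

            // "Next" is a placeholder; use the real link or button that fires the pagination script.
            HtmlAnchor next = firstPage.getAnchorByText("Next");
            HtmlPage secondPage = next.click();               // HtmlUnit executes the link's onclick JavaScript
            webClient.waitForBackgroundJavaScript(5_000);     // give any AJAX call time to finish

            System.out.println(secondPage.asXml());           // data from page 2
        }
    }
}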

How to fill out form data on a website

I am looking to develop an app that will take login details from the user, go to a website, log in, retrieve values from the web page, and then display them to the user on the phone.
Does Java have this functionality? Will I maybe need to use JavaScript instead? Do these answers depend on the website I am trying to access?
In my head I figure that I could just read in the parameters as strings or chars, parse the web page for the appropriate form, and "paste" the appropriate value into the form "box". However, I have never attempted anything like this in code, so I am completely new to the idea and don't really know where to start. I tried googling around, but any information I found was either irrelevant or conflicting.
I'm not looking for the code to do it, because I won't really learn anything from that, but a pointer in the right direction would be great. I really do want to get better at programming, so that's why I've started giving myself these little side projects.
Any help that can be offered would be great.
Ian,
You can try the HttpClient library (http://hc.apache.org/httpclient-3.x/) from Apache. It lets you access a website programmatically from Java code. You will need to do the following things:
Use the HttpClient library to POST the data to the website.
Receive the HTML response.
Use an HTML parser or XPath to retrieve the values from the response HTML.
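A minimal sketch of those three steps with Commons HttpClient 3.x might look like this; the login URL and the form field names are placeholders you would take from the site's login form.

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.PostMethod;

public class LoginExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();

        // Placeholder URL and field names: read them from the site's login <form>.
        PostMethod post = new PostMethod("https://example.com/login");
        post.addParameter("username", "ian");
        post.addParameter("password", "secret");

        int status = client.executeMethod(post);          // 1. POST the data to the website
        String html = post.getResponseBodyAsString();      // 2. receive the HTML response
        post.releaseConnection();

        System.out.println("HTTP status: " + status);
        // 3. hand `html` to an HTML parser or XPath library to pull out the values you want
        System.out.println(html);
    }
}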
You would need a script that accesses the web page and enters the data, but in my opinion this is illegal, because you are accessing a secured area and are able to look at sensitive data. Also, accessing the page via a script is "botting"; most pages have safety precautions to prevent the execution of scripts, because most of them are harmful.
In my opinion there is no legal and easy solution to this.

autogenerate HTTP screen scraping Java code

I need to screen scrape some data from a website, because it isn't available via their web service. When I've needed to do this previously, I've written the Java code myself using Apache's HttpClient library to make the relevant HTTP calls to download the data. I figured out which calls I needed to make by clicking through the relevant screens in a browser while using the Charles web proxy to log the corresponding HTTP calls.
As you can imagine this is a fairly tedious process, and I'm wondering if there's a tool that can actually generate the Java code that corresponds to a browser session. I expect the generated code wouldn't be as pretty as code written manually, but I could always tidy it up afterwards. Does anyone know if such a tool exists? Selenium is one possibility I'm aware of, though I'm not sure if it supports this exact use case.
Thanks,
Don
I would also add +1 for HtmlUnit, since its functionality is very powerful: if you need behaviour 'as though a real browser were scraping and using the page', it's definitely the best option available. HtmlUnit executes the JavaScript in the page (if you want it to).
It currently has full-featured support for all the main JavaScript libraries and will execute JS code that uses them. Correspondingly, you can get handles to the JavaScript objects in the page programmatically within your test.
If, however, the scope of what you are trying to do is smaller, more along the lines of reading some HTML elements where you don't much care about JavaScript, then NekoHTML should suffice. It's similar to JDOM in giving programmatic (rather than XPath) access to the tree. You would probably need to use Apache's HttpClient to retrieve the pages.
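As a small illustration of the JavaScript-handle point above, HtmlUnit lets you run a snippet inside the page and read the result back into Java. The URL and expression here are arbitrary placeholders.

import com.gargoylesoftware.htmlunit.ScriptResult;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class JsHandleExample {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            HtmlPage page = webClient.getPage("https://example.com/");   // placeholder URL

            // Evaluate an expression in the page's JavaScript context and get the value back in Java.
            ScriptResult result = page.executeJavaScript("document.title + ' | ' + location.href");
            Object value = result.getJavaScriptResult();
            System.out.println(value);
        }
    }
}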
The manageability.org blog has an entry that lists a whole bunch of web page scraping tools for Java. I don't seem to be able to reach it right now, but I did find a text-only copy in Google's cache.
You should take a look at HtmlUnit - it was designed for testing websites but works great for screen scraping and navigating through multiple pages. It takes care of cookies and other session-related stuff.
I personally like HtmlUnit and Selenium as my two favorite tools for screen scraping.
A tool called The Grinder allows you to script a session to a site by going through its proxy. The output is Python (runnable in Jython).
