Java Application - Load web page and check user selections in DOM

I want to build a Java application through which a user can request a web page, which then loads inside the Java application (not in a browser).
After the page loads, the user can select whichever elements they like on the page, and I want to track which elements they click through the DOM.
For example, if the user clicks the image of a product, I want to get that particular DOM element, with all its attributes such as src, class, and id.
I'd like to know whether any frameworks come close to doing anything like this, especially the clicked-element tracking; I'm fairly sure there are already quite a few ways to load a web page inside a Java application.

I don't know if this is what you have in mind, but you could take a look at HtmlUnit: http://htmlunit.sourceforge.net/gettingStarted.html. It is usually described as a unit-testing tool, but it can be used for other purposes as well; one of the descriptions I found is:
"A Java GUI-less browser, which allows high-level manipulation of web pages, such as filling forms and clicking links; just getPage(url), find a hyperlink, click(), and you have all the HTML. JavaScript and Ajax are processed automatically."

Related

How would I go about making a program to click a button on a google form?

It kinda says it all in the title. I'm really new to programming, so it's probably very simple, but I thought I should ask for help anyway in case I don't figure it out.
All help is appreciated!
You probably want to automate browser interactions. There are multiple ways you can do this when it comes to forms, but here are two ways to start you off.
You can use the Requests library to send POST requests to the server. To do this you would use the browser inspection tools to examine the POST request that is sent when you submit your form, then write a program that recreates it. This is a headless approach, meaning there is no visible browser that you can watch your program interacting with.
link: https://requests.readthedocs.io/en/master/
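The Requests library is Python. Since the rest of this page is Java-oriented, here is a hedged sketch of the same headless idea using the JDK's built-in java.net.http.HttpClient (Java 11+). The form URL and the entry.* field names are placeholders; copy the real ones from the POST request you observed in the inspector:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FormPost {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Body copied from what the browser's network inspector shows
            // on submit. The entry.* names here are placeholders.
            String form = "entry.123456=Billy+Bob&entry.654321=billy%40example.com";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://docs.google.com/forms/d/e/FORM_ID/formResponse"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }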
Method number 2 involves writing a program that uses a library to drive a real browser: it automates mouse movements, page scrolls, and key presses into selected inputs on the page (in your case, the form's input fields). One of the most popular libraries for this is Selenium. To use Selenium you run an instance of your browser (Firefox and Google Chrome are supported and well documented) and then write code that visits the form's page, selects each form field, types the data into the fields, and submits the form.
To figure out how to access each part of the form, use your browser's inspection tools (Firefox's are better than Chrome's, in my opinion) to see how each field is referred to in the HTML that builds the page. For example, inspecting the Name field of the form, you might find something like <form><input id="name-field" placeholder="Type your name here"></form> in the HTML. You would then use a Selenium method such as driver.find_element_by_id('name-field') to access the element. You could assign that element to a variable, name = driver.find_element_by_id('name-field'), and then call name.send_keys("Billy Bob") to have Selenium type "Billy Bob" into the name field.
To have a button clicked, you simply map the button to a variable, button = driver.find_element_by_id('button-id'), and then call button.click(). Note that "driver" in the examples above refers to the instance of the web browser you are automating, created at the start of the program.
link: https://selenium-python.readthedocs.io/index.html
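The method names above (find_element_by_id, send_keys) are from Selenium's Python bindings; the same flow in Selenium's Java bindings looks roughly like this (the element ids and URL are hypothetical, and geckodriver is assumed to be installed):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class FormFiller {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver(); // opens a real Firefox window
            try {
                driver.get("https://example.com/your-form"); // placeholder URL

                // Ids found via the browser inspection tools; placeholders here.
                WebElement name = driver.findElement(By.id("name-field"));
                name.sendKeys("Billy Bob");

                WebElement button = driver.findElement(By.id("button-id"));
                button.click();
            } finally {
                driver.quit();
            }
        }
    }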
Method #2 is probably the route you want to take as a beginner. I hope that helps get you started.

Navigate to and learn all the web objects on a page using Java (without Selenium)

I work for a start-up, where we have a requirement to automatically navigate to a given web application and find out information about all the objects contained within a page (inclusive of any iframes inside). We are supposed to code this module in Java.
So, I used Selenium WebDriver and was successful. However, due to some reasons, we've been asked not to use Selenium, but rather Core Java to do this.
So here's my question. Let's say I want to open "http://www.google.co.in" on my Firefox browser, and I have to get the attribute values for the Search Textbox, Search button and I'm feeling Lucky button. I have to do this using Java. Where do I start?
I had an idea: actually navigate to a page, read its HTML source, and build an XPath query to find each element and get its attributes. But how do I accomplish this navigation using Java (or jQuery, if that's possible)?
It may sound as if I'm trying to build an automation tool from scratch, but I'm just considering all possibilities.
Please help.
If you have loaded the HTML content of the page into a single String variable, you can use standard Java string mechanisms (indexOf, substring, regular expressions) to find content within it.
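As a hedged sketch of that approach in core Java: fetch the page source into a String with a plain URL connection, then search it with ordinary string operations (the <input> tag below is just an example target):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class PageSource {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://www.google.co.in");
            StringBuilder html = new StringBuilder();

            // Read the raw HTML into a single string.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    html.append(line).append('\n');
                }
            }

            // Naive string search: print the first <input ...> tag found.
            int start = html.indexOf("<input");
            if (start >= 0) {
                int end = html.indexOf(">", start);
                System.out.println(html.substring(start, end + 1));
            }
        }
    }

Be warned that raw string matching is brittle on real-world HTML, and it will never see content that JavaScript adds after load.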
This might help http://www.javaworld.com/article/2077567/core-java/java-tip-66--control-browsers-from-your-java-application.html
I don't know why you want to do this in Java instead of Selenium. Selenium would be the best tool for this job; you should try to convince your team instead.

Retrieving contents of URL after they have been changed by javascript

I am facing a problem retrieving the contents of an HTML page using java. I have described the problem below.
I am loading a URL in java which returns an HTML page.
This page uses JavaScript, so when I load the URL in the browser, a JavaScript function call occurs AFTER the page has loaded (in the HTML page's onBodyLoad) and modifies some content on the page (the innerHTML of one of the divs). This change is obviously visible to me in the browser.
Now, when I try to do the same thing using Java, I only get the HTML content of the page BEFORE the JavaScript call has occurred.
What I want to do is fetch the contents of the HTML page after the JavaScript function call has occurred, and all of this has to be done using Java.
How can I do this? What should my approach be?
You need to use a server-side browser library that will also execute the JavaScript, so you can get the JavaScript-updated DOM contents. A plain HTTP fetch doesn't do this, which is why you don't get the expected result.
You should try Cobra: Java HTML Parser, which will execute your JavaScript. See its site for the download and the documentation on how to use it.
Cobra:
It is Javascript-aware. DOM modifications that occur during parsing will be reflected in the resulting DOM. However, Javascript can be disabled.
For anyone reading this answer, Scott's answer above was a starting point for me. The Cobra project is long dead and cannot handle pages which use complex JavaScript.
However, there is something called HtmlUnit which does exactly what I want.
Here is a small description:
HtmlUnit is a "GUI-Less browser for Java programs". It models HTML documents and provides an API that allows you to invoke pages, fill out forms, click links, etc... just like you do in your "normal" browser.
It has fairly good JavaScript support (which is constantly improving) and is able to work even with quite complex AJAX libraries, simulating either Firefox or Internet Explorer depending on the configuration you want to use.
It is typically used for testing purposes or to retrieve information from web sites.
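The usual HtmlUnit pattern for your case looks something like this (the URL, the timeout, and the div id are placeholders):

    import com.gargoylesoftware.htmlunit.WebClient;
    import com.gargoylesoftware.htmlunit.html.HtmlPage;

    public class AfterJavaScript {
        public static void main(String[] args) throws Exception {
            try (WebClient webClient = new WebClient()) {
                webClient.getOptions().setJavaScriptEnabled(true);

                HtmlPage page = webClient.getPage("http://example.com/page.html");

                // Give onBodyLoad handlers and any Ajax calls time to finish.
                webClient.waitForBackgroundJavaScript(10_000);

                // The DOM now reflects the JavaScript modifications.
                System.out.println(page.getElementById("some-div").asXml());
            }
        }
    }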

Scrape multiple website pages using WebClient in Java

I am trying to scrape a website using WebClient. I am able to get the data on the first page and parse it, but I do not know how to read the data on the second page; the website calls a JavaScript function to navigate to the second page. Can anyone suggest how I can get the data from the next pages?
Thanks in advance
The problem you're going to have is that while you (a person) can read the JavaScript in the first page and see that it navigates to another page, having the computer do this is going to be hard.
If you could identify the block of code performing the navigation, you would then need to execute it in such a way that allowed your program to extract the URL. This again is going to be very specific to the structure of the JavaScript and would require a person to identify this.
In short, I think you're dead in the water with this one, though it serves as a good example of why the Unobtrusive JavaScript concept is so important.
This framework integrates HtmlUnit, with its headless JavaScript-enabled browser, to fully support scraping multiple pages in the same WebClient session: https://github.com/subes/invesdwin-webproxy
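With plain HtmlUnit the pattern is similar: click the JavaScript-driven link and let HtmlUnit produce the next page inside the same session. A hedged sketch (the URL, the page limit, and the XPath for the "next" link are guesses; inspect the real page for the actual locator):

    import com.gargoylesoftware.htmlunit.WebClient;
    import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
    import com.gargoylesoftware.htmlunit.html.HtmlPage;

    public class MultiPageScrape {
        public static void main(String[] args) throws Exception {
            try (WebClient webClient = new WebClient()) {
                HtmlPage page = webClient.getPage("http://example.com/results");

                for (int i = 0; i < 5; i++) {
                    System.out.println(page.asText()); // parse the current page here

                    // Hypothetical locator for the JavaScript "next" link.
                    HtmlAnchor next = page.getFirstByXPath("//a[@id='next']");
                    if (next == null) {
                        break;
                    }
                    // click() runs the link's JavaScript in the same session.
                    page = next.click();
                }
            }
        }
    }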

Java Android programming and JavaScript

I'm developing an Android application which takes its information from a site that uses JavaScript.
I want to run one of the site's JavaScript functions from my Android app.
For example, this site http://www.bgu.co.il/tremp.aspx has a "Next page" link at the bottom (in Hebrew) that is driven by a JavaScript function. I want to get to the next page through my app.
How can I send the site the command "move to next page", or trigger the button's onClick event?
EDIT: I'm taking information from this site using an XML parser (a SAX parser). I want to get to the next page of the site in order to parse it as well. I hope I have made myself clear now.
You really need to explain a little more fully...
Are you opening that page and parsing it in your code, have you embedded a WebView or are you just creating an Intent which opens that page in the user's preferred web-browser?
If the latter, then you definitely cannot do what you're suggesting.
If you're using a WebView, I'm pretty sure you still can't access the DOM of the page in the way you want to.
That means your solution is to load and parse the webpage in your code - extract the 'next' page and then do with that whatever you wish...
Check out the Using JavaScript in WebView section in the Android Developer Guide
OK - now that we know you're parsing the page, I can try to answer your question!
You can't "click" something on a parsed page because you have no JavaScript engine - it's just an XML document, a collection of nodes made up of text.
In this case you need to work out how the paging system actually works - and looking at the page in something like Firebug makes that quite simple to do.
The paging system works through a FORM POST mechanism - when you click 'Next' or a page number like '2' it simply loads the same page from the server, but with POST variables set.
The key variable is "__EVENTTARGET" - it tells ASP which page to load (in Hebrew of course).
The snag is that you'll need to pass all the other POST variables from this page too - that's every INPUT with its associated value. I notice there's one called "__EVENTVALIDATION" which appears to exist to stop people getting around this too easily by just passing "__EVENTTARGET".
A tool like Fiddler would be useful here too - it will show what is POSTed to the server when you click NEXT. All you have to do is replicate that in your call to load the page again, and you have the next page; repeat that with each page until there's nothing left to load...
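To make that concrete, here is a hedged core-Java sketch of replaying such a POST with HttpURLConnection (also available on Android). Every value below is a placeholder: the real __EVENTTARGET, __VIEWSTATE and __EVENTVALIDATION values must be scraped from the hidden INPUTs of the page you just parsed:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class NextPagePost {
        public static void main(String[] args) throws Exception {
            // Placeholder values: scrape the real ones from the hidden inputs
            // of the page you last downloaded.
            String body = "__EVENTTARGET=" + URLEncoder.encode("pager$next", "UTF-8")
                    + "&__EVENTARGUMENT="
                    + "&__VIEWSTATE=" + URLEncoder.encode("scraped-value", "UTF-8")
                    + "&__EVENTVALIDATION=" + URLEncoder.encode("scraped-value", "UTF-8");

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://www.bgu.co.il/tremp.aspx").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }

            // The response body is the HTML of the next page,
            // ready to feed to your SAX parser.
            System.out.println(conn.getResponseCode());
        }
    }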
Alternatively, you could switch to using a WebView - loading the page into that would mean it does have a JavaScript engine, and you could automate the 'clicking' (although I don't know the specifics of how to do that, the other answers allude that it's possible).
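For completeness, a hedged sketch of that WebView route. ASP.NET pages normally expose a __doPostBack JavaScript function, but the target name below is hypothetical; verify it against the real page, and note you would still have to get the resulting HTML back out of the WebView:

    import android.webkit.WebView;
    import android.webkit.WebViewClient;

    public class NextPageClient extends WebViewClient {
        private boolean clicked = false;

        @Override
        public void onPageFinished(WebView view, String url) {
            if (!clicked) {
                clicked = true;
                // Hypothetical postback target: copy the real one from the page.
                view.loadUrl("javascript:__doPostBack('pager$next', '');");
            }
        }
    }

You would wire this up with webView.getSettings().setJavaScriptEnabled(true), webView.setWebViewClient(new NextPageClient()), and then webView.loadUrl("http://www.bgu.co.il/tremp.aspx"); the one-shot flag stops the postback firing again when the second page finishes loading.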
