Testing responsiveness of an HTML page using Java

I am developing an application to test whether an HTML page is responsive or not. Right now, I am assuming that using media queries is the only way to make an HTML page responsive.
But I am using very crude logic to test it: I parse the HTML file and check it for the presence of a media query statement. If one is present, I declare the page responsive; otherwise, non-responsive.
Is there any other way I can go about it?
Is there any other test I can perform before declaring it as responsive or non-responsive?

Check whether they are using hard-coded px values instead of % or em. Maybe also check whether text is too small or links are too close together; see the sketch below.
At the end of the day it won't be a great resource for responsiveness checking, since there are so many factors involved.
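For what it's worth, a rough sketch of the media-query and unit checks using jsoup (the library choice, the URL, and the decision to inspect only inline <style> blocks are my assumptions for illustration):
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class ResponsiveHeuristic {
    // Counts non-overlapping matches of a regex within the text.
    private static int count(String text, String regex) {
        return text.split(regex, -1).length - 1;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at the page under test.
        Document doc = Jsoup.connect("https://example.com").get();

        // Only inline <style> blocks are inspected here; external
        // stylesheets would have to be fetched via their <link href> URLs.
        String css = doc.select("style").html();

        boolean hasMediaQuery = css.contains("@media");
        int pxUnits = count(css, "\\dpx\\b");
        int relativeUnits = count(css, "\\d%") + count(css, "\\dem\\b");

        System.out.println("media queries present: " + hasMediaQuery);
        System.out.println("hard-coded px units: " + pxUnits
                + ", relative %/em units: " + relativeUnits);
    }
}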

According to Ethan Marcotte's seminal article that introduced Responsive Web Design (http://alistapart.com/article/responsive-web-design), a responsive page will use media queries, flexible grid layouts and responsive text.
But, even if a page has these elements, it doesn't mean that it is using them correctly. A responsive page is not one that simply uses media queries.
I'm not sure that the ability to programmatically determine if a page is built responsively is even a viable goal. You can check for ingredients, but that won't tell you if the right recipe was followed.
Also, why have you tagged this question with Java?

Related

Edit and sanitize user input in a servlet when code is allowed?

The webpage I'm working on with JSP and a Java servlet needs to enable the user to write comments and articles which contain text but also code in various languages (including HTML and JavaScript).
The data is stored in a MySQL database and displayed later on the page.
For input, I thought to use one of the many WYSIWYG editors out there.
Those usually produce (X)HTML code for the database.
This means I need some kind of server-side sanitizing before inserting into the database, since the editor could easily be circumvented and malicious code displayed on the site (the database itself is secured by prepared statements).
What would be the best and simplest way to approach this topic?
And would it make more sense to switch to BBCode input instead of HTML?
I've found several threads around here, but most don't take into account that code actually needs to be displayed on the site, and most threads are already several years old.
Huge thanks in advance!
You can use KefirBB for BBCode processing or for HTML filtering.
https://github.com/kefirfromperm/kefirbb
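If you'd rather sanitize HTML directly, here is a minimal sketch using jsoup's sanitizer instead of KefirBB (the allowed tags and the sample input are illustrative only):
import org.jsoup.Jsoup;
import org.jsoup.safety.Safelist;

public class CommentSanitizer {
    // Allow basic formatting plus pre/code for displaying code samples;
    // scripts, event handlers and the like are stripped.
    private static final Safelist SAFE = Safelist.basic().addTags("pre", "code");

    public static String sanitize(String untrustedHtml) {
        return Jsoup.clean(untrustedHtml, SAFE);
    }

    public static void main(String[] args) {
        String input = "<p>Hello</p><script>alert('xss')</script>"
                + "<pre><code>&lt;b&gt;escaped sample&lt;/b&gt;</code></pre>";
        // The script element is removed; the escaped code survives as text.
        System.out.println(sanitize(input));
    }
}
Either way, code that should merely be displayed must be HTML-escaped (e.g. &lt; for <) before it reaches the page, so the browser renders it as text instead of interpreting it.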

Siebel Open UI and Selenium; changing IDs/names

I am working on a project for a client where they are going to upgrade to Siebel Open UI. With that upgrade, they also want to start implementing Selenium. The problem we are currently facing, or will face once we implement it, is that with each build the IDs/names of HTML elements in Siebel change. Because we are talking about a lot of views and applets, it's not a good solution to change the code manually each time.
What is a good solution for this problem? One solution that was offered is a correlation table where we keep track of changes to the IDs.
XPath in this case is also not a good option, because of the complicated structure of the views and applets.
I would suggest that you look into CSS selectors. They are faster and less brittle than XPath. For IDs/names that are dynamic, there is typically at least some portion of the ID that is static.
For instance,
<a id="somestatictext_12345">...
where "12345" is a dynamically generated number. In this case you can use a CSS selector like
driver.findElement(By.cssSelector("[id^='somestatictext']"));
Examples
"[id^='somestatictext']" - ID begins with "somestatictext"
"[id$='somestatictext']" - ID ends with "somestatictext"
"[id*='somestatictext']" - ID contains "somestatictext"
For more info, take a look at a CSS selector reference.
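Putting it together, a minimal WebDriver sketch (the URL and the "somestatictext" prefix are placeholders taken from the example above):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class PartialIdSelectorDemo {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com"); // placeholder URL

        // Match on the static prefix only, ignoring the
        // build-dependent suffix in ids like "somestatictext_12345".
        WebElement link = driver.findElement(
                By.cssSelector("a[id^='somestatictext']"));
        link.click();

        driver.quit();
    }
}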

Scraping issue (data-reactid)

I'm trying to scrape a website and compile a spreadsheet based on what data I pull.
The website I am trying to scrape is WEARVR.
I am not too experienced with scraping, but my approach would be to find unique attributes within HTML tags and use them to scrape what I want.
So for this website, my approach would be first to scrape a list of URLs of the pages you are taken to upon clicking one of the experiences (for example: https://www.wearvr.com/#game_id=game_1041), and then to cycle through this list, scraping the relevant attributes each time.
However, I am stuck at the first step: instead of working with simple "a href" tags, I come across "data-reactid" attributes, which confuse the matter.
I do my scraping with iMacros, but I'm pretty decent at Java now, so I would learn scraping in Java if need be (which seems likely, as iMacros is pretty limited).
My question is: how do these "data-reactid" attributes work, and how can I utilise them for my scraping purposes?
Additionally if this is an XY problem, please let me know and suggest a better approach.
Thanks for reading!
The simplest way to approach scraping is to treat the page like one big string (because ultimately, that is what it is). You can search within that string for certain things (like href=) to grab links. You can also reasonably assume that whatever is inside the a tags is relevant to the link and grab that.
You really don't have to understand HTML, and you don't have to understand how the page or any additional CSS or markup works; you just need to identify what sort of identifiable string combinations are around the text you want. I will say this is probably much easier to implement in Java than with iMacros, and probably more accurate.
The other way you can handle it, which requires a little more knowledge of HTML and XML, is to treat the entire page as an XML document. This doesn't always work with HTML, particularly if it is older or badly formed, so the string approach is easier. You get some utility out of the various XML mapping libraries that exist, but otherwise it's similar to the above.
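To illustrate the first approach, here is a minimal sketch that treats the page as one big string (the regex is deliberately naive and will miss malformed or unusual markup):
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NaiveLinkScraper {
    // Grabs each href value together with the text inside its <a> tag.
    private static final Pattern LINK = Pattern.compile(
            "<a[^>]*href=[\"']([^\"']+)[\"'][^>]*>(.*?)</a>",
            Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

    public static List<String> extract(String pageSource) {
        List<String> results = new ArrayList<>();
        Matcher m = LINK.matcher(pageSource);
        while (m.find()) {
            results.add(m.group(1) + " -> " + m.group(2));
        }
        return results;
    }
}
One caveat for a React-rendered page like this one: the raw HTML you download may not contain the final links at all if they are generated client-side, in which case sniffing the underlying Ajax/JSON requests is the way to go.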

GWT multiple Paging (Best / easiest way)

So far I've done some tests (e.g. RPC).
Next I come to the multiple-paging part; from what I've read so far, there are many options for this:
MVP, layout panels, UiBinder.
Now I really don't know which one I should choose, or which is easy and good.
I tried clearing my RootPanel and placing another widget (a Composite):
RootPanel.get().clear();
LoginComp login = new LoginComp();
rootPanel.add(login, 127, 125);
I don't know if this is the most professional approach. What is the best way to include my widgets as Composites?
First of all, a GWT application is a single-page application. After you have requested the application, you only go to the server to receive data.
I would use a Struts or Spring MVC application for the login and request the GWT application after a successful login. Your GWT application should have a shell, and this shell has an area where you can change your views. Changing a view is initiated via a place controller (a minimal sketch follows below the documentation links).
Take a look at the MobileWebApp example contained in the GWT SDK examples.
You will also find good documentation here:
MVP and Places Documentation
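To give a flavor of that, a stripped-down sketch of place-based navigation (LoginPlace is a hypothetical place of my own; the ActivityMapper/ActivityManager wiring that actually swaps the view in the shell is omitted):
import com.google.gwt.place.shared.Place;
import com.google.gwt.place.shared.PlaceController;
import com.google.web.bindery.event.shared.EventBus;
import com.google.web.bindery.event.shared.SimpleEventBus;

// Hypothetical place representing the login view.
class LoginPlace extends Place {
}

public class AppNavigation {
    private final EventBus eventBus = new SimpleEventBus();
    private final PlaceController placeController = new PlaceController(eventBus);

    // The shell listens for place changes and swaps the view accordingly.
    public void goToLogin() {
        placeController.goTo(new LoginPlace());
    }
}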
In my opinion, the best way is to add an element to your main HTML file which acts as a content wrapper, e.g.
<div id="content"></div>
Each of your pages can be represented as an extended Panel, implemented as a singleton. The page that should be displayed is then placed into that wrapper (note that RootPanel has no set method; you clear and add):
RootPanel.get("content").clear();
RootPanel.get("content").add(pagePanelX);
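A slightly fuller sketch of that idea (the ContentSlot helper and its method name are my own invention):
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.Widget;

public final class ContentSlot {
    private ContentSlot() {
    }

    // Replaces whatever page is currently shown in the #content wrapper.
    public static void show(Widget page) {
        RootPanel content = RootPanel.get("content");
        content.clear();
        content.add(page);
    }
}
Each singleton page would then be shown with something like ContentSlot.show(LoginPage.getInstance()), where LoginPage is one of your Composite pages.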

Java Bing Image Search

I have a small application in Java which searches for images using Bing image search. The problem I am facing is that it only gets the first 20 images, maybe because when we search on bing.com it populates the first 20 images first and then uses an infinite scrolling feature.
Is there any way to search more than 20 images using bing?
Cheers :)
I'm guessing this is because the site uses Ajax to populate the "infinite" scrolling list, as you call it.
You probably send an HTTP request and get the initial page (by the way, in my browser I got 6 images across by 4 down, i.e. 24, not 20; thinking about it, maybe my client also got only 20 at first and fetched the last 4 with Ajax...), and you'd need to do the paging by way of Ajax requests.
At a glance, the XHTML and associated JavaScript of the page is very dense and somewhat obfuscated; it would take a while to get oriented. An alternative to analyzing this page is to use a packet sniffer (such as Wireshark) and capture the requests which take place when you scroll down.
Essentially this will likely expose some form of Ajax request, which you can then easily emulate in Java. Typically the Ajax response is easy to parse whatever its nature (XML, JSON, gzip...).
A possible snag in this well-laid-out plan is if the data returned in the Ajax response is encrypted, for example if the extra images are bundled in some sort of envelope whose format you would then need to discover.
Depending on the actual task at hand, you may also try alternatives such as automation scripts in Greasemonkey (on Firefox) or similar tools.
What of the Bing API?
Note that all the above approaches are akin to screen-scraping and hence quite sensitive to even minute changes in the Bing application; depending on effective usage and context, this could also put the project in a legal grey area... A better approach may be to register for a proper application ID with MS/Bing and to use the Bing API.
You are simulating a browser? Doesn't the Bing engine have an entry point for programs instead, such as a web service, which would make your task much easier?
EDIT: the SDK appears to be here: http://msdn.microsoft.com/en-us/library/cc980922.aspx
Just wanted to post a direct answer to the question:
Bing uses Ajax (of course) for the infinite scroll. Each "tick" is based on a simple Ajax GET request which acquires new images.
For instance, this URL returns 30 results (121-150) in the "htmlraw" format, based on the query "max payne":
http://www.bing.com/images/async?q=max+payne&format=htmlraw&first=121
Edit:
It works with the original URL too; just add &first=NUMBER to the query string. Example:
www.bing.com/images/search?q=payne&go=&form=QBLH&scope=images&filt=all&first=10
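A quick Java sketch of emulating a few scroll "ticks" against that endpoint (the loop bounds and the 30-per-page step are assumptions based on the example above):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BingScrollEmulator {
    public static void main(String[] args) throws Exception {
        // Emulate three "ticks" of the infinite scroll, ~30 results each.
        for (int first = 1; first <= 61; first += 30) {
            URL url = new URL("http://www.bing.com/images/async?q=max+payne"
                    + "&format=htmlraw&first=" + first);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            StringBuilder page = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    page.append(line).append('\n');
                }
            }
            System.out.println("first=" + first + ": "
                    + page.length() + " chars of raw HTML");
        }
    }
}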
I am building my own bulk image collector (as a "learning project" for myself) and I found out that it is paginated like this.
FYI, Google and Bing are easy; Yahoo and AltaVista (redundant, since their results come from Yahoo) are far more problematic: they don't post the direct link to the original image.
Have fun! :)
This can be done using the count parameter. For example, I tried a GET request to "https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=shoes&mkt=en-us&count=30" and it returned 30 images.
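A minimal Java sketch of that call (the subscription key is a placeholder you would get from the Azure portal):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BingApiImageSearch {
    public static void main(String[] args) throws Exception {
        String key = "YOUR_SUBSCRIPTION_KEY"; // placeholder from the Azure portal
        URL url = new URL("https://api.cognitive.microsoft.com/bing/v7.0/images/search"
                + "?q=shoes&mkt=en-us&count=30");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", key);

        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
        }
        // The response is JSON; the image results live in its "value" array.
        System.out.println(json);
    }
}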
