Making a dynamic AJAX web application searchable - java

I have developed an AJAX web application that constantly generates new dynamic pages with IDs (like http://www.enggheads.com/#!question/1419242644475)
whenever someone adds a question on the website.
I have made the application crawlable by implementing the scheme recommended at this link:
https://developers.google.com/webmasters/ajax-crawling/
I have verified that 'Fetch as Google' returns the HTML snapshots, and the Facebook developer tools also fetch the data accurately. I've submitted a sitemap with all the current URLs, but when we search, only some of the links from the sitemap show up in Google's search results, and Google refuses to index any of the AJAX links, although there are no crawl errors.
1--My question: What else do I have to do to get all the links of my application to show in Google's search results?
2--My question: As explained above, this application is constantly generating new dynamic pages, so we have to regenerate the sitemap each time (or at a set interval) someone adds a question on the site. Or is there another, better way to handle this situation?
And I don't know how "Facebook.com", "Stackoverflow.com" and "in.linkedin.com" manage their sitemaps, if they use them at all...?
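On the sitemap-regeneration point, the step can be sketched in plain Java. This is a minimal illustration, not the site's actual code: the `SitemapBuilder` class and the `questionIds` list are hypothetical, and only the URL pattern is taken from the question above.

```java
import java.util.List;

public class SitemapBuilder {
    // Builds a sitemap.xml body from a list of question IDs.
    // The #! URL pattern mirrors the one in the question; Google's AJAX
    // crawling scheme accepted such "pretty" URLs in sitemaps and fetched
    // them via the ?_escaped_fragment_= transformation.
    static String buildSitemap(List<String> questionIds) {
        StringBuilder sb = new StringBuilder();
        sb.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        sb.append("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n");
        for (String id : questionIds) {
            sb.append("  <url>\n");
            sb.append("    <loc>http://www.enggheads.com/#!question/").append(id).append("</loc>\n");
            sb.append("  </url>\n");
        }
        sb.append("</urlset>\n");
        return sb.toString();
    }
}
```

A method like this could be called from the same code path that saves a new question, or on a schedule; large sites typically split the output into multiple files behind a sitemap index rather than rewriting one huge file.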

Related

What should I use to create a web services app?

As part of a job application I need to make a web app. I'm only familiar with Java SE, so here are my concerns. I need to make a web service where there will be an authentication window at the beginning, then I need to show JSON data (probably parse it and display it) as a table or a list, with a button next to each row to choose one of them and get to the next page, where the user can choose materials and so on.
I have the data in JSON on a server and need to pull it from there; the detail URLs look like /materialDetails?ID=x, where x is the ID (probably an HTTP/URI parameter). Should I use Java REST? If so, do I need to create a site in XML and then put the Java data inside? There are only a few tutorials on the internet and I can't find any good ones (sometimes the problem is the server, sometimes the dependencies). I also looked on YouTube, but apart from https://www.youtube.com/watch?v=X36Dud8cS4Y I couldn't find anything useful. Could someone explain this to make it at least a little easier, or just point me to a specific framework? Thanks in advance.
You could create a Dynamic Web Project with Tomcat and a MySQL database for starters, and use RESTEasy to create a web service that gets data from your database.
I don't know exactly what is expected from you, but this might be a good start. "Making a web app" is a bit like saying "I need to develop a Java program": it is a bit vague!
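To make "a web service that returns JSON" concrete, here is a dependency-free sketch using the JDK's built-in `com.sun.net.httpserver` package; RESTEasy would give the same shape with JAX-RS annotations (`@Path`, `@GET`) instead. The `/materialDetails` path comes from the question; the JSON fields and hard-coded data are assumptions for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MaterialService {
    // Builds the JSON body for /materialDetails?ID=x.
    // In a real app this would query the database instead of echoing the ID.
    static String materialJson(String id) {
        return "{\"id\": \"" + id + "\", \"name\": \"material-" + id + "\"}";
    }

    // Registers the endpoint and starts listening. A JAX-RS framework like
    // RESTEasy replaces this wiring with annotations and a servlet container.
    static HttpServer startServer(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/materialDetails", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // e.g. "ID=42"
            String id = (query != null && query.startsWith("ID=")) ? query.substring(3) : "unknown";
            byte[] body = materialJson(id).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        return server;
    }
}
```

A browser or JavaScript front end would then fetch `http://localhost:8080/materialDetails?ID=42` and render the returned JSON as the table described above.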
I don't know REST, but I think your application can be implemented with these technologies: HTML and Servlets/JSPs.
I would write an authentication page in HTML (one form element with two inputs and a button) which would pass the credentials to a JavaServer Page or a Servlet (they're roughly equivalent). There I would build the table (another HTML element), thus producing a new HTML page.
P.S.: you're using JSON as the format, so there's no need to learn XML.
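The table-building step described in the answer above can be isolated as plain Java. This is a hedged sketch: the `TableRenderer` class, the header/row parameters, and their contents are all made up for illustration; in a servlet this string would be written to the response.

```java
import java.util.List;

public class TableRenderer {
    // Renders headers and rows as an HTML table, the kind of markup a
    // servlet or JSP would emit after parsing the JSON data into rows.
    static String renderTable(List<String> headers, List<List<String>> rows) {
        StringBuilder html = new StringBuilder("<table>\n<tr>");
        for (String h : headers) html.append("<th>").append(h).append("</th>");
        html.append("</tr>\n");
        for (List<String> row : rows) {
            html.append("<tr>");
            for (String cell : row) html.append("<td>").append(cell).append("</td>");
            html.append("</tr>\n");
        }
        html.append("</table>");
        return html.toString();
    }
}
```

Real code would also HTML-escape the cell values, since they come from external JSON.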

How to make a dynamic website searchable by search engines

I have developed a dynamic website using technologies like AJAX, Java, etc. that constantly generates new pages with IDs (like http://www.enggheads.com/#!question/1419242644475), similar to stackoverflow.com, but my site's pages are not searchable by Google or any other search engine.
I want my pages to show up in search engine results. How can I achieve this? I have not submitted any sitemap to Google Webmaster Tools. Is a sitemap really the right solution...? That would mean we have to regenerate the sitemap each time (or at a set interval) someone adds a question on my website.
I'm really confused about how search engines index dynamically created pages like Stack Overflow questions and Facebook profiles.
Look up how meta tags work. Every dynamic page will have its own set of tags and its own description.
Also, it takes time for Google to index your pages.
Another reason why your website isn't shown in the results is that your keywords are too common. Google indexes websites based on keywords mentioned in the meta tags. If they are very common, other popular sites will be ranked above yours, hence your site is not in the top results.
Google also takes the popularity of your website into consideration; it calls this "juice". Your website's juice increases and decreases based on how old your site is and how many relevant links point to and from your website.
All the points I mentioned are just a few of the things that fall under the heading of search engine optimization.
SEO is a massive topic and you will only learn it gradually as your website grows.
On the other hand, if you want Google to push your results up to the top, you can pay Google to do so: Google runs the biggest advertising business.
This is because search engines cannot find URLs containing /#?=, so you can rewrite your URLs. This page can help you do that: http://weblogs.asp.net/scottgu/tip-trick-url-rewriting-with-asp-net
First of all, to be indexed by Google, Google must first FIND the URL. The best way to be found is to have many backlinks (popularity); otherwise you have to submit a sitemap or the URLs to the search engines.
Unfortunately the query "inurl:#!" gives zero results in Google, so Luiggi Mendoza is right about that.
You can try rewriting URLs using .htaccess to make them SEO-friendly.
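The article linked above is ASP.NET-specific; on an Apache server the equivalent `.htaccess` rewrite looks roughly like this. The `question/<id>` path and the `index.jsp` target are assumptions based on the URLs in the question, and note that a rewrite rule can only see the path, never the `#!` fragment, which the browser keeps to itself:

```apache
# Map clean URLs like /question/1419242644475 to the real handler,
# so search engines get a crawlable URL instead of a #! fragment.
RewriteEngine On
RewriteRule ^question/([0-9]+)$ /index.jsp?question=$1 [L,QSA]
```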

How to integrate Spring REST into a Wicket website?

We maintain an online book shop using Java Wicket, and it also has a search function for finding books in a MySQL database. If someone searches for "Fiction" in the search box, the results will contain metadata for each result. If a user clicks on a title, it takes them to a details page.
I have a requirement to add a review box/form to the details page using Spring. We maintain two other websites which need a similar sort of feature, so we decided to develop this as a small, individual, reusable application and then integrate it into the Java Wicket or Zend Framework sites.
What I need to do is:
1) get the details from the form, like name, email and review text - JSP or HTML, jQuery
2) show the entered review on the web page and post those details to MySQL (update the reviews table in the DB) when the form is submitted - Spring REST
Is it really possible to accomplish this using Spring REST, or are there other options?
Can anyone give some ideas on how to approach this requirement?
Sure you can do this. If you implement this new feature with JavaScript, you can include it in both frameworks. I would suggest you use a JavaScript framework like AngularJS, as it will make calling the REST service easier.
In addition, you need to deploy your REST service as its own application, or bundle it with the existing application that contains the Wicket app.
Here you'll find an example of calling a REST service with AngularJS: How to access the services from RESTful API in my angularjs page?
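Framework aside, the core of step 2 is validating the form fields and serializing them before the database insert. A hedged, framework-agnostic sketch: the field names come from the question, while the class, method, and validation rules are assumptions; in Spring this logic would sit behind something like a `@PostMapping`-annotated handler.

```java
public class ReviewEndpoint {
    // Validates a submitted review and serializes it to JSON.
    // In the real app the resulting values would be inserted into the
    // MySQL reviews table via JDBC or JPA, not returned as a string.
    static String toReviewJson(String name, String email, String review) {
        if (name == null || name.isEmpty() || email == null || !email.contains("@")) {
            throw new IllegalArgumentException("name and a valid email are required");
        }
        String safeReview = (review == null) ? "" : review.replace("\"", "\\\"");
        return "{\"name\":\"" + name + "\",\"email\":\"" + email
             + "\",\"review\":\"" + safeReview + "\"}";
    }
}
```

Because the endpoint only speaks HTTP and JSON, the same service can back the Wicket site and the Zend sites alike, which is the reusability the question asks about.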

Why search engines can't index Ajax sites directly? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I understand, as a GWT developer, that AJAX sites render pages dynamically; for instance, the site I made is a single page and contains tabs to render "pages" like "home", "about us", "products", etc.
However, those pages are usually addressed with a hash (#), so that if I access, say, http://example.com#HomePage or #Products, it would render the page and the contents "eventually"...
Now if I put a link to my products page on my crawlable static blog site, for example http://example.com#Products, and click through, my site will render the products eventually after some AJAX calls.
However, if I check the "page source" of the site in the browser, the page is still the same HTML, "empty of AJAX content". Is this the reason why AJAX sites can't be indexed? Do search engines not load the URLs they crawl in an HTML rendering engine, so they only get the static page?
Anyway, I saw workarounds for this issue that use an external "crawler" service as part of the AJAX site; however, is there no solution that does not require setting up such an external service/server?
However, if I check the "page source" of the site in the browser, the page is still the same HTML, "empty of AJAX content". Is this the reason why AJAX sites can't be indexed? Do search engines not load the URLs they crawl in an HTML rendering engine, so they only get the static page?
Yes, sites that depend on AJAX to pull in content are depending on JavaScript to pull in content, and search engine indexing bots do not (in general) execute JavaScript, since:
It requires much more CPU/RAM to do so
It is very hard to determine which interactions will pull in new content and which will do other things
Anyway, I saw workarounds for this issue that use an external "crawler" service as part of the AJAX site; however, is there no solution that does not require setting up such an external service/server?
Don't depend on JavaScript in the first place. Build a site that works with regular links. Layer JavaScript on top if you want to. Use pushState and friends to update the address bar with the real URL when new content is pulled in.
In short, follow the principles of Progressive Enhancement and Unobtrusive JavaScript.
The first thing you should know is that crawlers don't execute JavaScript on the page, but there is a way to make the page crawlable (to show the crawler that your application uses AJAX).
Example (Google crawler):
You should first indicate to the crawler that your site supports the AJAX crawling scheme by adding a special token (#!) to your application's AJAX links. After that, the crawler will transform each such URL and call your server with the transformed URL. The server should return an HTML snapshot (generated HTML) which represents the content that is created when a user loads the page with AJAX in a browser. In the end, you can use the Fetch as Google tool to test what the Google crawler will receive when it calls your AJAX links. An in-depth explanation can be found here.
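The URL transformation described above can be mirrored on the server side. A minimal sketch, with the class and method names invented for illustration; the `_escaped_fragment_` parameter name itself is fixed by Google's AJAX crawling scheme:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class AjaxCrawling {
    // The crawler rewrites "http://site/#!state" into
    // "http://site/?_escaped_fragment_=state" before requesting it, because
    // the part after # is never sent to the server. This reverses that
    // mapping so the server knows which HTML snapshot to render.
    static String fragmentFromCrawlerUrl(String url) {
        String marker = "?_escaped_fragment_=";
        int i = url.indexOf(marker);
        if (i < 0) return null; // a normal request, not a crawler request
        return URLDecoder.decode(url.substring(i + marker.length()), StandardCharsets.UTF_8);
    }
}
```

Given `http://www.enggheads.com/?_escaped_fragment_=question/1419242644475`, the server would extract `question/1419242644475` and return the pre-rendered HTML snapshot for that question.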
I don't work with GWT, but maybe you can find a GWT-specific solution here.

Google page index - java

Does anyone have an idea how to get the Google page index in Java?
I have been googling for the last 2-3 days but without success. Can anyone refer me to an API for that or give some suggestions on how to do it?
Lots of thanks in advance.
For example, if we search for facebook on Google, we get around 22,980,000,000 results. I want to fetch this number using Java.
Make a corresponding HTTP request from Java to Google, then parse the returned HTML. There is a div with the ID resultStats; this div contains the number of results.
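A sketch of that parsing step using only the JDK. Two caveats: scraping Google's result page is against their terms of service, and the markup (including the `resultStats` id) changes without notice, so the supported route is the Custom Search API mentioned below. The HTML here is a hard-coded sample so the extraction logic is clear; a real fetch would use `HttpURLConnection`, and a proper HTML parser such as jsoup would be more robust than a regex.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResultStats {
    // Extracts the text inside <div id="resultStats">...</div>, which is
    // where Google's result page historically printed the result count.
    static String extractResultStats(String html) {
        Pattern p = Pattern.compile("<div id=\"resultStats\">([^<]*)</div>");
        Matcher m = p.matcher(html);
        return m.find() ? m.group(1) : null;
    }
}
```

From the extracted string, the number itself can then be pulled out by stripping everything except digits and separators.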
I'm not sure what your real requirement is; what kind of index do you want? Google exposes a fair number of APIs via RESTful services, some of them packaged with a JavaScript library, like the Google Maps API. There are also Java client libraries for OAuth authentication.
The Custom Search API information can be found at http://code.google.com/apis/customsearch/v1/overview.html. A comprehensive list of Google APIs can be accessed at https://code.google.com/apis/console
