If a user refreshes the page continuously with the F5 key, the page loads very slowly and a blank page is shown for a long time.
How can I solve this problem?
I tried caching on the server side, but I don't think I am using it properly.
Can somebody help me with an example?
I think you need to use the browser cache, which can be controlled by HTTP headers or meta tags.
http://www.mnot.net/cache_docs/
You need to set the page cache to around 5 seconds or some similar value, so that no new request is sent to the server within that interval.
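For example, here is a minimal servlet sketch (the class name is hypothetical; assumes the standard Servlet API) that asks the browser to reuse the response for 5 seconds:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CachedPageServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Tell the browser (and any proxy) it may reuse this response
        // for up to 5 seconds, so rapid F5 presses can be served locally.
        resp.setHeader("Cache-Control", "public, max-age=5");
        resp.setContentType("text/html");
        resp.getWriter().println("<html><body>Cached for 5 seconds</body></html>");
    }
}

Note that a hard refresh (Ctrl+F5) bypasses the cache regardless, so this only softens the problem for plain F5 refreshes.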
A few things:
You could try to minimize processing time within your application, for example by cutting out wasteful operations. It sounds like your application spends a lot of time recreating the same output.
You could add some sort of caching on the server side and send the user the same page (i.e. no "new" processing) for some time. Depending on the mechanism, this may not be feasible though (HTTPS, security?). At least, as far as I know.
Of course, you could also change the way the site works. You could use Ajax to push new information to the page the user is already on, and so take away the urge to refresh.
And maybe your server simply does not have enough power to serve many users at the same time?
It is very difficult to stop a user from pressing F5.
Try making your code more optimized.
Use meta tags for caching, such as:
Cache-Control
Expires
Pragma: no-cache
Also check this for JSP caching.
Just use this JSP method to auto-refresh your webpage:
<% response.setIntHeader("Refresh", 5); %>
http://www.tutorialspoint.com/jsp/jsp_auto_refresh.htm
I have seen that one of the main differences between POST and GET is that POST is not cached but GET is.
Could you explain what you mean by "cache"?
Also, whether I use POST or GET, the server sends me a response. Is there any difference? In both cases I have request data and a response, don't I?
Thanks
To Cache (in the context of HTTP) means to store a page/response either on the client or some intermediate host - perhaps in a content distribution network. When the client requests a page, then the page can be served from the client's cache (if the client requested it before) or the intermediate host. This is faster and requires fewer resources than getting the page from the server that generated it.
One downside is that if the request changes some state on the server, that change won't happen if the page is served from a cache. This is why POST requests are usually not served from a cache.
Another downside to caching is that the cached copy may be out of date. The HTTP caching mechanisms try to prevent this.
The basic idea behind the GET and POST methods is that a GET message only retrieves information but never changes the state of the server. (Hence the name). As a result, just about any caching system will assume that you can remember the last GET response returned, and that the next one will look the same.
A POST, on the other hand, is a request that sends new information to the server. Not only can these not be cached (because there is no guarantee that the next POST won't modify things even more; think of +1/like buttons, for example), but they actually have to invalidate parts of the cache, because they might modify pages.
As a result, your browser will warn you when you try to refresh a page to which you POSTed information, because you might make changes you did not intend. When GETting a page, it will not warn you, because you cannot change anything on the site that way.
(Or rather: it's your job as a programmer to make sure that nothing changes when GETting a page.)
GET is supposed to return the same result and not change anything on the server side; hence it is idempotent.
POST, on the other hand, can modify something on the server (make an entry in the database, delete something, etc.) and hence is not idempotent.
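As a rough illustration (a hypothetical servlet, not tied to any of the questions here), a GET handler only reads state while a POST handler mutates it:

import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LikeCounterServlet extends HttpServlet {
    private final AtomicInteger likes = new AtomicInteger();

    // GET only reads state: safe to cache, repeat, or prefetch.
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.getWriter().println("Likes: " + likes.get());
    }

    // POST changes state: repeating it changes the result, so it must
    // not be cached, and browsers warn before re-submitting it.
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.getWriter().println("Likes: " + likes.incrementAndGet());
    }
}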
As for caching the data in a GET, that has been addressed nicely here:
http://www.ebaytechblog.com/2012/08/20/caching-http-post-requests-and-responses/#.VGy9ovmUeeQ
I have written a Java program which reads numbers from different files. The numbers are added as they are read, and the running sum is displayed in a browser. The browser should keep displaying the new sum as it changes at every step.
I know how to display static values in a browser; I can use JavaScript. But I don't know what mechanism to use to display a continuously changing value.
Any help is appreciated!
You'll have to request the data to display from the server. You can use a data-binding library like Knockout to automatically update the page as the underlying model changes, or you can just use a library like jquery to modify the DOM on your own.
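For the server side, a minimal sketch of a polling endpoint (hypothetical names; assumes a servlet container, with the browser fetching this URL every second or so via Ajax using jQuery or Knockout):

import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SumServlet extends HttpServlet {
    // In the real program, the file-reading code would update this value.
    static final AtomicLong currentSum = new AtomicLong();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        // Disable caching so every poll sees the latest sum.
        resp.setHeader("Cache-Control", "no-cache");
        resp.getWriter().print(currentSum.get());
    }
}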
Alternatively, you could keep a pipe open to the server using the Comet model: http://en.wikipedia.org/wiki/Comet_%28programming%29. However, it can be expensive to eat up a thread for long periods of time on your web server.
Good luck.
Check out Knockout.js (http://www.knockoutjs.com/); it is a framework for automatically updating the UI when data changes.
There is this site which in the address bar only shows something like "http://example.com/examplepage.aspx".
Normally, if it had parameters after it, you could probably just copy that URL.
But since it doesn't, how do I bookmark this page?
It doesn't necessarily have to be a bookmark, but at least an easy way to access the page.
(FYI, I know basic HTML and Java; maybe it's only possible programmatically.)
Thanks
Generally, dynamic pages (in the context of the question) are not bookmark friendly.
You could probably sniff the incoming request and create a fake form which you can then submit later.
However, there may be situations where there are parameters, such as a session ID, which are valid only for short periods of time.
You should read up on sessions. In really simple terms, a session is assigned to each user accessing a website. Sessions have an expiry period: if you stay idle beyond a set time (determined by the developer), you will not be able to get back in, and every time you log back in you may be assigned a new session.
You may have noticed that some websites automatically log you in; this is mostly done with the help of cookies. Cookies work in tandem with sessions: they store very basic information, so the next time you come back to a website it can identify you as a returning user and give you access.
Then again, some pages don't use sessions; they might have their own custom way of identifying users.
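In Java terms, a minimal (hypothetical) servlet fragment shows the idea: a session is created on the first visit and looked up on later ones, with the session ID travelling back and forth in a cookie (JSESSIONID):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class WelcomeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Creates a session on the first visit; reuses it afterwards.
        HttpSession session = req.getSession(true);
        session.setMaxInactiveInterval(15 * 60); // expire after 15 idle minutes
        if (session.isNew()) {
            resp.getWriter().println("Welcome, new visitor!");
        } else {
            resp.getWriter().println("Welcome back!");
        }
    }
}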
Bookmarks can be used with dynamic pages if the page accepts GET requests and does not depend on extra parameters that would block you.
To summarize:
Dynamic pages are not very bookmark friendly.
There may be parameters used to access a webpage which change constantly and which you cannot really save.
You may be able to get into dynamic pages using bookmarks if they don't rely on any of these constantly changing parameters.
Since you know Java, you should probably read up on JSPs/servlets to get an understanding of what happens behind the scenes in dynamic pages.
Hope this answers your questions.
I have a small application in Java which searches for images using Bing image search. The problem I am facing is that it gets only the first 20 images, maybe because when we search on bing.com it loads the first 20 images and then relies on an infinite-scrolling feature.
Is there any way to get more than 20 images from Bing?
Cheers :)
I'm guessing this is because the site uses Ajax to populate the "infinite" scrolling list, as you call it.
You probably send an HTTP request and get the initial page (by the way, in my browser I got 6 images across x 4 down, i.e. 24, not 20; thinking about it, maybe my client also got only 20 at first and fetched the last 4 with Ajax), and you'd need to do the paging through Ajax requests as well.
At a glance, the XHTML and associated JavaScript of the page is very dense and somewhat obfuscated; it would take a while to get oriented. An alternative to analyzing the page is to use a packet sniffer (such as Wireshark) and capture the requests which take place when you scroll down.
Essentially, this will likely expose some form of Ajax request, which you can then easily emulate with Java. Typically the Ajax response is easy to parse whatever its nature (XML, JSON, gzip...).
A possible snag to this well-laid-out plan is if the data in the Ajax response is encrypted, for example if the extra images are bundled in some sort of envelope whose format you would then need to discover.
Depending on the actual task at hand, you may try alternatives such as automation within Greasemonkey (on Firefox) or similar tools.
What about the Bing API?
Note that all of the above approaches are akin to screen-scraping and hence quite sensitive to even minute changes in the Bing application; depending on effective usage and context, this could also put the project in a legal grey area. A better approach may be to register a proper application ID with MS/Bing and use the Bing API.
Are you simulating a browser? Doesn't the Bing engine have an entry point for programs instead, a web service or similar, which would make your task much easier?
EDIT: SDK appears to be here: http://msdn.microsoft.com/en-us/library/cc980922.aspx
Just wanted to post a direct answer to the question:
Bing uses Ajax (of course) for the infinite scroll. Each "tick" is a simple Ajax GET request which acquires new images.
For instance, this URL returns 30 results (121-151) in "htmlraw" format based on the query "max payne":
http://www.bing.com/images/async?q=max+payne&format=htmlraw&first=121
Edit:
It works with the original URL too; just add &first=NUMBER to the query string. Example:
www.bing.com/images/search?q=payne&go=&form=QBLH&scope=images&filt=all&first=10
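A minimal Java sketch of paging through results this way (this is screen-scraping, so it may break whenever Bing changes; the URL format is taken from the answer above, and the query is just an example):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BingScrollScraper {
    public static void main(String[] args) throws Exception {
        // Fetch three "scroll ticks" of ~30 results each by bumping `first`.
        for (int first = 1; first <= 61; first += 30) {
            URL url = new URL(
                "http://www.bing.com/images/async?q=max+payne&format=htmlraw&first=" + first);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("User-Agent", "Mozilla/5.0"); // look like a browser
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // A real scraper would parse the image URLs out of this markup.
                    System.out.println(line);
                }
            }
        }
    }
}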
I am building my own bulk image collector (as a "learning project" for myself) and I found out that it is paginated like this.
FYI, Google and Bing are easy; Yahoo and AltaVista (redundant, since their results come from Yahoo) are far more problematic, as they don't post a direct link to the original image.
Have fun! :)
This can be done by using the count parameter. For example, I tried a GET request to "https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=shoes&mkt=en-us&count=30" and it returned 30 images.
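A minimal Java sketch of that call (assumes you have a valid subscription key from the Azure portal; the Ocp-Apim-Subscription-Key header is what the Cognitive Services endpoints expect):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BingImageSearch {
    public static void main(String[] args) throws Exception {
        String endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/images/search"
                + "?q=shoes&mkt=en-us&count=30";
        HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint).openConnection();
        // Replace with your own key (placeholder value).
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", "YOUR_KEY_HERE");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder json = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
            // The response is JSON; the results sit under the "value" array.
            System.out.println(json);
        }
    }
}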
I've already read most of the questions regarding techniques to prevent form spam, but none of them seem to suggest using the browser's session.
We have a form that sends an email to a given address, and we didn't like the idea of using captchas or JavaScript, as we wanted to keep the user journey simple and accessible to those without JavaScript.
We would like to use the session object to help prevent form spam. Our webapp is developed on WebLogic Server 10 using Struts.
The solution being: when the form loads, it sets a variable in the session object. Once you click submit, we check the session for that variable. If there is no variable, we redirect back to the form; if the variable exists, we send the email.
I would really appreciate any opinions/reasons why this might be a bad idea, so we can evaluate this solution against others.
Many thanks,
Jonathan
There is nothing to prevent a spammer from automating the process of downloading your form (thus generating the cookie) and submitting it. It may impose a slight burden on the spammer, but a trivial one.
As an example, a form can be easily downloaded and submitted, with cookies preserved, using a command-line tool such as cURL. This can then be run from a script repeatedly.
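To make that concrete, here is a hypothetical Java sketch of the whole "download the form, replay the cookie, submit" loop (the URL and field name are invented for illustration):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FormSpamBot {
    public static void main(String[] args) throws Exception {
        // Step 1: GET the form page and grab the session cookie.
        HttpURLConnection get = (HttpURLConnection)
                new URL("http://example.com/contact").openConnection();
        String cookie = get.getHeaderField("Set-Cookie");
        get.getInputStream().close();

        // Step 2: POST the form, replaying the cookie, as often as we like.
        HttpURLConnection post = (HttpURLConnection)
                new URL("http://example.com/contact").openConnection();
        post.setRequestMethod("POST");
        if (cookie != null) {
            post.setRequestProperty("Cookie", cookie.split(";")[0]);
        }
        post.setDoOutput(true);
        try (OutputStream out = post.getOutputStream()) {
            out.write("message=spam".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Response: " + post.getResponseCode());
    }
}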
Session objects can, depending on implementation, be relatively heavy in terms of resource usage, as well as somewhat slow. Additionally, the spammer, if they realize how you are blocking them, can simply start a new session every time they hit the form by not sending back the session cookie.
So, because this technique relies on the client behaving nicely, and that expected behavior is fairly easy for a spammer to sidestep, it is probably less useful than some other ways to solve the problem.
Thank you for your reply cdeszaq, but I'm not sure if you misunderstood my question.
For the form submission to complete successfully, clients will be forced to load the form to set up the session object correctly. Only when the session is in the correct state, will it be possible to send an email.
If the spammer is not sending back the session cookie, then they will not be able to spam my form as they haven't gone to my form page that creates the right session.
I agree that using the session object would consume extra resources. Our implementation would simply (in the JSP) call session.setAttribute("formLoaded", true); and in my Struts action I would simply call session.getAttribute("formLoaded") to check.
I wonder if this might work:
Each time you render page/form, create a random bit of text
Put that text in the session
Include that text as a hidden field in the form
User submits the form
Action compares the hidden text to the value in the session - if there's a match, send the email
Since a hacker wouldn't be able to put any random value in the session, they wouldn't be able to spam. Right?
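A rough sketch of that idea (hypothetical names; plain servlet API rather than Struts, though a Struts action would perform the same comparison):

import java.io.IOException;
import java.util.UUID;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class ContactFormServlet extends HttpServlet {
    // Render the form and stash a one-time token in the session.
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String token = UUID.randomUUID().toString();
        req.getSession(true).setAttribute("formToken", token);
        resp.setContentType("text/html");
        resp.getWriter().println(
            "<form method='post'>"
            + "<input type='hidden' name='token' value='" + token + "'/>"
            + "<input type='text' name='message'/>"
            + "<input type='submit'/></form>");
    }

    // On submit, the hidden field must match the token in the session.
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        HttpSession session = req.getSession(false);
        String expected = session == null ? null
                : (String) session.getAttribute("formToken");
        if (expected != null && expected.equals(req.getParameter("token"))) {
            session.removeAttribute("formToken"); // one use only
            // sendEmail(req.getParameter("message")); // hypothetical helper
            resp.getWriter().println("Email sent.");
        } else {
            resp.sendRedirect(req.getRequestURI()); // back to the form
        }
    }
}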