I've built a complex web app using AngularJS, Java, and Hibernate. I'm using an $http request to save the form data:
$http({
    method: 'POST',
    url: $scope.servlet_url,
    headers: { 'Content-Type': 'application/json' },
    data: $scope.deals
}).success(function(data) {
}).error(function(data) {
});
I'm using angular version 1.2.19.
When the save button is hit, this request is triggered with the data in the form and goes to the server, where the data is saved in the database. Before saving to the database, many validations are performed and some external data related to the form data is fetched, so the save takes a while (approximately 5 to 7 minutes depending on the form data provided). After the data is saved, I redirect to another page based on the response.
But in my case the response takes time, and in the meantime the same request is triggered again with no clue as to why. The request isn't called anywhere else in the code, yet it fires again, which is confusing.
The same code works fine if the save takes less than 5 minutes. If it exceeds 5 minutes, it goes into an infinite loop and the save happens as many times as the request is triggered. The response to the first request reaches the Angular controller, but the controller doesn't recognize it, which means we can't handle the response. When this happens the page gets stuck and we have to refresh or navigate away manually.
Is there any way to prevent the duplicate request in AngularJS? If there is, could anyone please help me achieve it?
Thanks in advance.
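One common client-side mitigation, sketched below under the assumption that the save is kicked off from a single promise-returning function: guard the call with an in-flight flag so that a second trigger reuses the pending request instead of firing a new one (and disable the save button while it's pending).

```javascript
// Wrap a promise-returning function so that while one call is still
// pending, further calls reuse the pending promise instead of firing
// a new request.
function singleFlight(fn) {
  var pending = null;
  return function () {
    if (pending) {
      return pending; // a request is already in flight
    }
    pending = Promise.resolve(fn.apply(this, arguments)).then(
      function (result) { pending = null; return result; },
      function (err) { pending = null; throw err; }
    );
    return pending;
  };
}

// Hypothetical usage in the controller (names from the question):
// $scope.save = singleFlight(function () {
//   return $http({
//     method: 'POST',
//     url: $scope.servlet_url,
//     headers: { 'Content-Type': 'application/json' },
//     data: $scope.deals
//   });
// });
```

Note that a POST which silently fires again after roughly five minutes often points to a connection timeout (in a proxy or the server) causing a retry, so a more robust fix may be to return a response quickly (e.g. an acknowledgement) and poll for the save result.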
Related
Hi, I am trying to send a list of objects from a Java controller to Angular as JSON. With 20,000 results it works fine, but with 30,000 Angular gets an empty body. Can anyone help with how to get these 30,000 records in one go? The Angular service gets the data and converts it from JSON.
The Java code returns:
List<**VO>
private extractData(res: Response) {
    let body = res.json(); // Error: body comes back as ""
    let data = body || [];
    return data;
}
There's hardly ever a good reason to load all the data into the front end with large datasets; it causes a lot of problems and generally makes your web application slow. You're doing pagination, but only in the front end. Pagination can also be done in the back end: instead of getting ALL items from the back end and handling pagination solely on the front end, fetch only the data for the current page. Extend your API with a page number and page size, and make the API return the total number of results as well, either as part of the response body or in an HTTP header. Each time a user switches to a different page, you request the data for that particular page from the back end again.
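As an illustrative sketch (the parameter names page/pageSize and the X-Total-Count header are assumptions, not part of the original API), the front end would then request one page at a time:

```javascript
// Build the URL for one page of results; the parameter names
// (page, pageSize) are assumptions and must match the back end.
function pagedUrl(baseUrl, page, pageSize) {
  return baseUrl + '?page=' + encodeURIComponent(page) +
         '&pageSize=' + encodeURIComponent(pageSize);
}

// Hypothetical usage with $http; X-Total-Count is an assumed header
// carrying the total result count:
// $http.get(pagedUrl('/api/records', $scope.page, 50))
//   .success(function (data, status, headers) {
//     $scope.rows = data;
//     $scope.total = parseInt(headers('X-Total-Count'), 10);
//   });
```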
As a workaround, you can fetch the records partially, say 10,000 records per request, and then merge the records on the JavaScript side.
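A minimal sketch of that workaround, where fetchChunk(offset, size) is a hypothetical promise-returning function (e.g. backed by $http.get with offset/size parameters) that fetches one slice of the records:

```javascript
// Fetch `total` records in slices of `size` and merge them client-side.
// `fetchChunk(offset, size)` is a hypothetical promise-returning
// function that retrieves one slice of the data from the server.
function fetchAllInChunks(fetchChunk, total, size) {
  var offsets = [];
  for (var off = 0; off < total; off += size) {
    offsets.push(off);
  }
  // request the slices sequentially and concatenate the results
  return offsets.reduce(function (acc, off) {
    return acc.then(function (merged) {
      return fetchChunk(off, Math.min(size, total - off)).then(function (chunk) {
        return merged.concat(chunk);
      });
    });
  }, Promise.resolve([]));
}
```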
I am using a REST API for searching. When an AJAX call is fired, REST returns JSON from the Java code:
return JResponse.ok(searchResult).build(); // searchResult is a List of custom objects
In JavaScript I stringify that JSON and parse it to show the relevant data on screen:
var search = jQuery.parseJSON(JSON.stringify(data));
Now I want to secure/obfuscate the JSON response returned from REST, so that anyone who hits the APIs directly won't get a readable response. I tried BSON but was not able to implement it successfully, and while googling I didn't find much on how to put a collection object into BSON and how to retrieve it back in JS.
I suggest you go with tokens.
Every request made to the server must contain a token that changes with every request. Also check the request headers to verify it is an AJAX request, and only return a result if it is. In addition, add a rule disallowing cross-origin access.
I think if you do this, your data will not be accessible to anyone via a direct HTTP request.
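A minimal client-side sketch of that idea; the header names here (X-Auth-Token in particular) are illustrative assumptions, not a standard:

```javascript
// Build the per-request headers: a rotating token plus the marker
// header AJAX libraries conventionally send. The X-Auth-Token name
// is illustrative only.
function secureHeaders(token) {
  return {
    'X-Requested-With': 'XMLHttpRequest', // lets the server reject non-AJAX hits
    'X-Auth-Token': token                 // token rotated on every request
  };
}

// Hypothetical usage:
// $.ajax({ url: '/api/search', headers: secureHeaders(currentToken) });
// The server would verify both headers, then issue a fresh token with
// the response for the next request.
```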
I went through http://www.w3schools.com/tags/ref_httpmethods.asp to read about GET vs POST. Here is the description.
To clear up the confusion, take a scenario where a user creates a customer on page 1 (with the submit button) and navigates to a success page (page 2).
On reload (say the user presses F5 on the success page), a GET request is said to be harmless, whereas with a POST request "data will be re-submitted".
My understanding is that with both request methods (GET/POST) the data will be resubmitted, so in the customer scenario two customers will be created when the user presses F5, whether it's POST or GET. So as per my understanding, data is re-submitted with both GET and POST requests and neither is harmless. Please correct my understanding if it is wrong.
For the history point: it is said that with a GET request "parameters remain in browser history", while with a POST request "parameters are not saved in browser history". My question is: if request parameters are not saved in browser history for a POST request, how does pressing F5 on the success page create a duplicate customer? Are they stored somewhere other than the browser history for POST requests?
I'll try to explain point by point:
About GET being harmless: the GET method is supposed to be safe and idempotent. That means that given the same URL and the same parameters it should always return the same result (user=34&date=01-07-2013 should return the same page) and SHOULDN'T change anything (it should do nothing more than a sort of query with "user" and "date"). Of course, it's quite common to break this rule and actually change internal state (do an update or the like); that is the case you're mentioning (page1 --> page2 creating something). POST requests don't have that requirement and are meant to change internal state.
About parameters remaining in browser history: what they really mean is that in a GET request the parameters are contained in the URL itself (mysite.com?user=34&date=01-07-2013), so saving the URL also saves the parameters. In a POST request the parameters go in the body of the request rather than in the URL; so you're right, old browsers used to store only the URL, while nowadays browsers are optimized to also store those POST parameters in an internal cache (which is what lets F5 re-submit them).
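To make the difference concrete, here is a small sketch: with GET the parameters are serialized into the URL (and therefore into history), while with POST the same parameters travel in the request body instead.

```javascript
// With GET the parameters become part of the URL itself, and the URL
// is what the browser stores in history:
function toQueryString(params) {
  return Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
  }).join('&');
}

// GET: parameters visible in the URL (and in history), e.g.
//   '/customers?' + toQueryString({ user: 34, date: '01-07-2013' })
// POST: the same parameters go in the request body instead; the
// browser keeps them in an internal cache, which is what allows F5
// to re-submit the form.
```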
Suppose I have the following web service call using the @GET method:

@GET
@Path(value = "/user/{id}")
@Produces(MediaType.APPLICATION_JSON)
public Response getUserCache(@PathParam("id") String id, @Context HttpHeaders headers) throws Exception {
    HashMap<String, Object> map = new HashMap<String, Object>();
    map.put("id", id);
    SqlSession session = ConnectionFactory.getSqlSessionFactory().openSession();
    Cre8Mapper mapper = session.getMapper(Cre8Mapper.class);
    // slow it down 5 seconds
    Thread.sleep(5000);
    // get data from database
    User user = mapper.getUser(map);
    if (user == null) {
        return Response.ok().status(Status.NOT_FOUND).build();
    } else {
        CacheControl cc = new CacheControl();
        // cache the data for 60 seconds
        cc.setMaxAge(60);
        cc.setPrivate(true);
        return Response.ok(gson.toJson(user)).cacheControl(cc).status(Status.OK).build();
    }
}
To experiment, I slow the current thread down by 5 seconds before fetching data from my database.
When I call my web service using Firefox Poster, within 60 seconds it seems much faster on the 2nd, 3rd calls and so forth, until 60 seconds have passed.
However, when I paste the URI into a browser (Chrome), it seems to be slowed down by 5s every time. I'm really confused about how caching is actually done with this technique. Here are my questions:
1. Does Poster actually look at the max-age header and decide when to fetch the data?
2. On the client side (web, Android, ...), when accessing my web service, do I need to check the header and perform the caching manually, or does the browser already cache the data itself?
3. Is there a way to avoid fetching data from the database every time? I guess I would have to store my data in memory somehow, but could it potentially run out of memory?
4. In this tutorial, JAX-RS caching tutorial: how does caching actually work? The first line always fetches the data from the database:
Book myBook = getBookFromDB(id);
So how is it considered cached? Unless the code doesn't execute in top-down order.
@Path("/book/{id}")
@GET
public Response getBook(@PathParam("id") long id, @Context Request request) {
    Book myBook = getBookFromDB(id);
    CacheControl cc = new CacheControl();
    cc.setMaxAge(86400);
    EntityTag etag = new EntityTag(Integer.toString(myBook.hashCode()));
    ResponseBuilder builder = request.evaluatePreconditions(etag);
    // cached resource did change -> serve updated content
    if (builder == null) {
        builder = Response.ok(myBook);
        builder.tag(etag);
    }
    builder.cacheControl(cc);
    return builder.build();
}
From your questions I see that you're mixing client-side caching (HTTP) with server-side caching (database). I think the root cause is the different behavior you observed in Firefox and Chrome, so first I will try to clear that up:
When I call my web service using Firefox Poster, within 60 seconds it seemed much faster on the 2nd, 3rd calls and so forth, until it passed 60 seconds. However, when I paste the URI into a browser (Chrome), it seemed to slow down 5s every time.
Example:

@Path("/book")
@GET
public Response getBook() throws InterruptedException {
    String book = " Sample Text Book";
    TimeUnit.SECONDS.sleep(5); // thanks @fge
    final CacheControl cacheControl = new CacheControl();
    cacheControl.setMaxAge((int) TimeUnit.MINUTES.toSeconds(1));
    return Response.ok(book).cacheControl(cacheControl).build();
}
I have a RESTful web service up and running, and the URL for it is:
http://localhost:8780/caching-1.0/api/cache/book - GET
Firefox:
The first time I accessed the URL, the browser sent a request to the server and got a response back with cache-control headers.
Second request within 60 seconds (using Enter): this time Firefox didn't go to the server for the response; instead it loaded the data from cache.
Third request after 60 seconds (using Enter): this time Firefox made a request to the server and got a response.
Fourth request using Refresh (F5 or Ctrl+F5): if I refresh the page (instead of hitting Enter) within 60 seconds of the previous request, Firefox doesn't load the data from cache; instead it makes a request to the server with a special header.
Chrome:
Second request within 60 seconds (using Enter): this time Chrome sent the request to the server again instead of loading the data from cache, and it added the header cache-control: max-age=0 to the request.
Aggregating the results:
Because Chrome responds differently to an Enter press, you saw different behavior in Firefox and Chrome; it has nothing to do with JAX-RS or your HTTP response. To summarize: clients (Firefox/Chrome/Safari/Opera) will cache data for the time period specified in cache-control, and the client will not make a new request to the server unless the time expires or we force a refresh.
I hope this clarifies your questions 1,2,3.
4. In this tutorial, JAX-RS caching tutorial: how does caching actually work? The first line always fetches the data from the database:
Book myBook = getBookFromDB(id);
So how is it considered cached? Unless the code doesn't execute in top-down order.
The example you're referring to is not about minimizing database calls; it's about saving bandwidth over the network. The client already has the data and is revalidating with the server to check whether the data has been updated; only if it has do you send the actual entity in the response.
Yes.
When using a browser like Firefox or Chrome, you don't need to worry about the HTTP cache because modern browsers handle it; Firefox, for example, uses an in-memory cache. On Android it depends on how you interact with the origin server: a WebView is effectively a browser object, but if you use HttpClient you need to handle the HTTP cache on your own.
It's not about HTTP caching but about your server-side logic. The common answer is to use a database cache so that you don't need to hit the database on every HTTP request.
Actually, JAX-RS just provides you with ways to work with HTTP cache headers. You need to use CacheControl and/or EntityTag to do time-based caching and conditional requests; for example, when using EntityTag, the builder handles the 304 response status code, which you never need to worry about.
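A sketch of the client's half of that conditional round trip (the function and the cache shape are illustrative, not part of any library API): keep the last entity tag, send it back as If-None-Match, and reuse the cached body on a 304.

```javascript
// Client side of an ETag round trip: remember the last entity tag,
// send it back as If-None-Match, and keep the cached body when the
// server answers 304 Not Modified. The cache shape ({etag, body})
// is an illustrative choice.
function handleConditionalResponse(cache, status, etag, body) {
  if (status === 304) {
    return cache.body; // server says our copy is still current
  }
  cache.etag = etag;   // remember the validator for the next request
  cache.body = body;
  return body;
}

// Hypothetical usage after receiving a response:
//   send header { 'If-None-Match': cache.etag }, then
//   var book = handleConditionalResponse(cache, status, etagHeader, responseBody);
```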
Well, I'm trying to build a data-import module. In the module, the user chooses a .txt file with data and then clicks the upload button. I want to have a textarea or textbox (my project is a Java EE web app) where the web app shows the real-time progress of the upload process with descriptive messages.
I've been thinking about (and have searched for) multiple AJAX requests, and multiple AJAX responses to one request (the latter is not valid, as I read), but I'm confused about the usage of AJAX in this case. Is it valid for the user to hit "Upload" and then for me to fire an AJAX request that returns the progress of the record currently being imported?
I'm thinking of using:
jQuery 1.6.2
Gson (for AJAX)
Any suggestion would be appreciated.
I would recommend using the JBoss RichFaces 'poll' mechanism for that, or just a simple jQuery script on the client side:
Ajax Poll Example with RichFaces: http://richfaces-showcase.appspot.com/richfaces/component-sample.jsf?demo=poll&skin=blueSky
JQuery (loads of examples on the web):
http://net.tutsplus.com/tutorials/javascript-ajax/creating-a-dynamic-poll-with-jquery-and-php/
jQuery AJAX polling for JSON response, handling based on AJAX result or JSON content
How about using an iframe that handles the upload form? That way the browser wouldn't need to update (via AJAX calls) the contents of a page "we're already leaving". The iframe could be styled so that it's indistinguishable from the other content.
AJAX calls to some method that keeps an eye on a progress variable (say, a double indicating a percentage) are perfectly valid. Below is a bare-bones sketch:

// the upload handler updates a shared progress variable,
// and a second endpoint returns it to the polling client
volatile double progress = 0.0d;

void upload(HttpServletRequest request, HttpServletResponse response) {
    // process the file, updating `progress` in real time as records are imported
}

void ajaxProgress(HttpServletRequest request, HttpServletResponse response) throws IOException {
    // write the current percentage so the client can poll it
    response.getWriter().write(String.valueOf(progress));
}
You may also want to consider all the traffic going back and forth from showing real-time processing information for uploaded files.
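The client half of that polling setup can be sketched like this, where getProgress is a hypothetical promise-returning function (e.g. wrapping $.ajax) that asks the progress endpoint for the current percentage:

```javascript
// Ask the server for progress every `intervalMs` milliseconds and stop
// once it reports 100%. `getProgress` is a hypothetical function
// returning a promise for the current percentage.
function pollProgress(getProgress, onUpdate, intervalMs) {
  return getProgress().then(function (pct) {
    onUpdate(pct); // e.g. append a descriptive message to the textarea
    if (pct >= 100) {
      return pct;
    }
    return new Promise(function (resolve) {
      setTimeout(function () {
        resolve(pollProgress(getProgress, onUpdate, intervalMs));
      }, intervalMs);
    });
  });
}
```

Keeping the interval at a second or so limits the extra back-and-forth traffic mentioned above.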