There have been several Play Framework pagination questions, but all of them use JPA or Ebean. I need to paginate data returned from a web service. Is there a way to do this with the Play pagination module, or am I stuck with jQuery? I am also new to Play and Java, coming from ASP.NET MVC. The web service returns a List of whatever model I am querying.
You should not paginate results from the web service in Play's controller, as that would not be optimal. Consider three scenarios (in order of preference).
Let's say you want to display 10 items at once, but the data source returns 100,000 for a sample query (which means 10,000 pages).
Pagination should be done by the data generator (the web service in this case), so you should send a query describing what you are looking for, how big a page you want, and which page you need, e.g. ?q=pagination&size=10&page=123. This will respond with a List of the 10 items on page 123. If you have the possibility to change the web service to add pagination, it's the best choice (see the first sketch below).
If the web service doesn't offer pagination, get all the results at once and use jQuery to paginate them on the client. So if you get a set of 100,000 items, push all of them to the client and make sure that changing pages doesn't query the web service again. A poor option, but still better than the next one.
You can iterate over the returned results to split them into pages, count the total amount, etc. But if you use the controller to paginate, every page change will fetch all 100,000 items and then throw away everything but the 10 you display. Drama :)
So if you can't use the first option and don't want to use jQuery, cache the results on your server (e.g. by storing them in the database as separate rows); in that case you will be able to use Ebean for local searching and paging.
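For the first option, the call from Play is just a matter of forwarding those parameters; a minimal sketch, assuming Play 2.x's Java WS client (the service URL is illustrative, the parameter names are the ones from the example above):

    import play.libs.F.Promise;
    import play.libs.WS;

    public class ItemsClient {
        // Ask the service for page 123 of 10 items matching "pagination".
        public static Promise<WS.Response> fetchPage() {
            return WS.url("http://example.com/api/items")
                    .setQueryParameter("q", "pagination")
                    .setQueryParameter("size", "10")
                    .setQueryParameter("page", "123")
                    .get();
        }
    }

And for the caching workaround, a rough sketch assuming the results have been copied into a local Ebean-mapped entity (the CachedItem entity and its fields are hypothetical):

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import play.db.ebean.Model;

    @Entity
    public class CachedItem extends Model {
        @Id
        public Long id;
        public String name;

        public static final Finder<Long, CachedItem> find =
                new Finder<Long, CachedItem>(Long.class, CachedItem.class);

        // Page locally: skip page * size rows, return the next size rows.
        public static List<CachedItem> page(int page, int size) {
            return find.where()
                       .setFirstRow(page * size)
                       .setMaxRows(size)
                       .findList();
        }
    }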
What is a good way to extend org.teiid.translator.ws to read a complete set of records by iterating over all pages returned by a paginated web service?
Since pagination of results is not part of any REST API standard (unlike OData), you would have to extend the current translator and provide that custom behaviour to scroll through the pages. Unlike JDBC-style resultset scrolling, you would need to devise a way to execute the URL with your offsets each time the Teiid engine asks for the next batch of results. If you want an example, take a look at the OData translator for a similar flow.
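At its core, the scrolling is just re-issuing the request with a moving offset until the service returns a short page. A hedged sketch, where the endpoint, the offset/limit parameter names, the Map-based row model, and the fetchPage/process helpers are all hypothetical:

    import java.util.*;

    public class PageScroller {
        private static final int LIMIT = 500;

        // Keep asking the service for the next slice until a short page
        // signals that there is no more data.
        public void readAllPages() {
            int offset = 0;
            List<Map<String, Object>> batch;
            do {
                batch = fetchPage("http://example.com/records", offset, LIMIT);
                process(batch);              // hand the rows to the engine
                offset += LIMIT;
            } while (batch.size() == LIMIT); // a short page means we are done
        }

        // Hypothetical: execute url?offset=...&limit=... and map the response.
        private List<Map<String, Object>> fetchPage(String url, int offset, int limit) {
            return Collections.emptyList();
        }

        // Hypothetical: convert a batch into the translator's row format.
        private void process(List<Map<String, Object>> batch) {
        }
    }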
I'm creating a web application. The frontend is React and the backend is Java; they communicate with each other via REST. On the UI I show a list of items, and I need to filter them by some parameters.
Option 1: filter logic is on the frontend
In this case I just need to make a GET call to the backend and fetch all items.
After the user chooses a filter option, the filtering happens on the UI.
Pros: I don't need to send data to the backend and wait for the response, so refreshing the list should be faster.
Cons: if I ever need multiple frontend clients, say a mobile app, then I have to implement the filters again in that app too.
Option 2: filter logic is on the backend
In this case I get all list items when the app loads. After the user changes the filter options, I need to send a GET request with the filter params and wait for the response.
After that, I update the list of items on the UI.
Pros: the filter logic is written only once.
Cons: it will probably be slower, because it takes time to send the request and get the result back.
Question: where should the filter logic be, in the frontend or in the backend? What is the best practice?
Filter and limit on the back end. If you had a million records and a hundred thousand users trying to access those records at the same time, would you really want to send a million records to EVERY user? It would kill your server and the user experience: waiting for a million records to arrive from the back end and then be processed on the front end takes ages compared to fetching 20-100 records and clicking a pagination button to retrieve the next 20-100. On top of that, filtering a million records on the front end would, again, take a very long time and ultimately is not very practical.
From a real-world standpoint, most websites have some sort of record limit: eBay = 50-200 records, Amazon = ~20, Target = ~20, etc. This ensures quick server responses and a smooth user experience for every user.
This depends on the size of your data.
For example, if you have a large amount of data, it is better to implement the filter logic on the backend and let the DB perform the operations.
If you have a small amount of data, you can do the filter logic on the frontend after fetching it.
Let us understand this with an example.
Suppose you have an entity with 100,000 records and you want to show it in a grid.
In this case it is better to fetch 10 records per call and show them in the grid.
If you want to perform any filter operation on this, it is better to run the query against the DB on the backend and get the results from there.
If you have just 1,000 records in your entity, it will be beneficial to fetch all the data and do the filter operations on the frontend.
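For the large-data case, the backend page query can be as simple as the following; a minimal JDBC sketch where the table and column names (items, name), the prefix-style filter, and the LIMIT/OFFSET syntax (MySQL/PostgreSQL) are all assumptions:

    import java.sql.*;
    import java.util.*;

    public class ItemQueries {
        // Returns one page of items whose name matches the filter.
        static List<String> fetchPage(Connection con, String filter,
                                      int page, int size) throws SQLException {
            String sql = "SELECT name FROM items WHERE name LIKE ? "
                       + "ORDER BY name, id LIMIT ? OFFSET ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, filter + "%");   // prefix filter
                ps.setInt(2, size);              // page size
                ps.setInt(3, page * size);       // rows to skip
                List<String> names = new ArrayList<String>();
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        names.add(rs.getString(1));
                    }
                }
                return names;
            }
        }
    }

Ordering by a unique column as a tie-breaker (here id) keeps the pages stable between calls.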
Most likely begin with the frontend (unless you're dealing with huge amounts of data):
1. Implement filtering on the frontend (unless for some reason it's easier to do on the backend, which I find unlikely).
2. Iterate until the filtering functionality is reasonably stable.
3. Analyze your traffic and see if it makes sense to put the effort into implementing backend filtering: what percentage of requests are actually filtered, and what savings would backend filtering bring?
4. Implement (or not) backend filtering, depending on the results of #3.
As a personal note, the accepted answer is terrible advice:
"If you had a million records, and a hundred thousand users trying to access those records at the same time": nothing forces those hundred thousand users to use filtering, so your system should be able to handle that doomsday scenario anyway. Backend filtering should be just an optimization, not a solution.
once you do filtering on the backend you'll probably want to do pagination as well, and that is not a trivial feature if you want consistent results.
doing backend filtering is likely to become much more complex than frontend filtering; be aware that you're going to spend a significant amount of time (not only on the initial implementation but also on ongoing maintenance), and ask yourself whether it's premature optimization.
TL/DR: do it wherever is easier for you, and don't worry about it until it makes sense to start optimizing.
It depends on the specific requirements of your application, but in my opinion the safer bet would be the back-end.
Considering you need filtering in the first place, I assume you have enough data that paging through it is required. In this case, you need to have the filtering on the back-end.
Let's say you have a page size of 20. After applying the filter, you would expect to have a page of 20 entities matching that filtering criteria in the UI. This can't be achieved if you fetch 20 entities, store them in the front-end, and afterwards apply the filter on them.
Also, if you have enough data, fetching all of it into the front-end will be impossible due to memory constraints.
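Concretely, the back-end would expose the filter and paging parameters together, so a page of 20 always means 20 matching entities. A hedged sketch assuming Spring MVC on the Java side (the path, parameter names, and the Item/ItemService types are illustrative):

    import java.util.List;
    import org.springframework.web.bind.annotation.*;

    @RestController
    public class ItemController {
        private final ItemService itemService; // hypothetical service, injected

        public ItemController(ItemService itemService) {
            this.itemService = itemService;
        }

        // Returns one page of already-filtered results;
        // the client never receives the rest of the data.
        @GetMapping("/items")
        public List<Item> items(@RequestParam String filter,
                                @RequestParam int page,
                                @RequestParam int size) {
            return itemService.findByFilter(filter, page, size);
        }
    }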
I am trying to use Jaspersoft with my application via a custom data source. My use case is:
My custom data source gets the data from a REST service, and the data is very large, around 100 million rows.
What I have achieved so far is fetching the whole data set, which JasperSoft saves in its cache, and generating some ad-hoc reports/charts out of it.
So if the user wants to filter anything, the data is filtered from what is already present in the report or cache (I am not sure about the exact mechanics of the filtering).
What I want is some kind of lazy loading.
For example, fetch the first 1 million rows at once and generate a report from them (only for crosstab/table reports). Then, when the user filters, my custom data source (Java code) should be able to detect and read that filter in code, build REST query parameters out of it, fetch the filtered data, and fill the report again. Something like a listener on the filter, but one that also has the capability to re-fill the report.
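Roughly, I picture the data source batching like this (a sketch; fetchBatch, the Map-based row model, and the batch size are hypothetical placeholders):

    import java.util.*;
    import net.sf.jasperreports.engine.*;

    // Lazily-loading data source: fetches rows from the REST service one
    // batch at a time instead of caching all of them up front.
    public class LazyRestDataSource implements JRDataSource {
        private static final int BATCH = 1000000;
        private Iterator<Map<String, Object>> current = Collections.emptyIterator();
        private Map<String, Object> row;
        private int offset = 0;
        private boolean exhausted = false;

        @Override
        public boolean next() throws JRException {
            if (!current.hasNext() && !exhausted) {
                List<Map<String, Object>> batch = fetchBatch(offset, BATCH);
                exhausted = batch.size() < BATCH; // short batch: no more data
                offset += batch.size();
                current = batch.iterator();
            }
            if (!current.hasNext()) {
                return false;
            }
            row = current.next();
            return true;
        }

        @Override
        public Object getFieldValue(JRField field) {
            return row.get(field.getName());
        }

        // Hypothetical: call the REST service with offset/limit (and the
        // current filter) as query parameters and map the response rows.
        private List<Map<String, Object>> fetchBatch(int offset, int limit) {
            return Collections.emptyList();
        }
    }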
Any ideas are appreciated.
Regards,
Ashit
I have a database with 20,000 records. Each record has a name. When a user wants to view a record, he can visit a web app and type the name of the record in an input field. While typing, results from the database should be shown/filtered, matching what the user has typed. I would like to know the basic architecture/concepts for programming this.
I'm using the following language stack:
frontend: HTML5/JavaScript (+ AJAX to make instant calls while the user is typing)
backend: Java + JDBC to connect to a simple SQL database
My initial idea is:
A user types text
Whenever a character is entered or removed in the input field, make an AJAX request to the backend
The backend does a LIKE '%input%' query on the name field in the database
All data found by the query is sent as a JSON string to the frontend
The frontend processes the JSON string and displays whatever results it finds
My two concerns are the high number of AJAX requests to process and the possibly very heavy LIKE queries. What are ways to optimize this? Only search on every second character typed/removed? Only query for the first ten results?
Do you know of websites that utilise these optimizations?
NOTE: assume the records are persons and the names are like real people's names, so some names are more common than others.
You can choose the SPA approach: load all 20,000 names/ids to the client side and then filter them in memory. It's supposed to be the fastest way, with minimal load on the database and back-end.
Here are possible solutions:
Restrict search to prefix search: LIKE 'prefix%' can be executed efficiently using a BTREE-type index.
Measure the performance of the naive LIKE '%str%' solution first; if you are working on a B2B application, the database will likely keep that table in memory and run the queries fast enough.
Look at the documentation for your database; there could be special features for this, such as an inverted index.
As @Stepan Novikov suggested, load your data in memory and search it manually (see the sketch after this list).
Use specialized search indexers like SOLR or ElasticSearch (likely overkill for only 20k records)
If you are feeling ninja, implement your own N-gram index.
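For the in-memory option, 20,000 names easily fit in a sorted structure; a sketch using a TreeSet, where the case-insensitive ordering and the result cap are assumptions:

    import java.util.*;

    public class NameIndex {
        private final TreeSet<String> names =
                new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);

        public NameIndex(Collection<String> allNames) {
            names.addAll(allNames);
        }

        // All names starting with prefix: the subSet from "prefix" (inclusive)
        // up to prefix + '\uffff' (exclusive) in the sorted set, capped at limit.
        public List<String> prefixSearch(String prefix, int limit) {
            List<String> out = new ArrayList<String>();
            for (String name : names.subSet(prefix, prefix + "\uffff")) {
                if (out.size() == limit) {
                    break;
                }
                out.add(name);
            }
            return out;
        }
    }

The range lookup is O(log n) per keystroke, so the per-request cost is negligible compared to a round trip to the database.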
I use GWT for the UI and Hibernate/Spring for the business layer. The following GWT widget is used to display the records: http://collectionofdemos.appspot.com/demo/com.google.gwt.gen2.demo.scrolltable.PagingScrollTableDemo/PagingScrollTableDemo.html. I assume the sorting is done on the client side.
I do not retrieve the entire result set, since it is huge.
I use

    principals = getHibernateTemplate().findByCriteria(criteria, fromIndex, numOfRecords);

to retrieve the data. There is no sorting criterion in the Hibernate layer.
This approach does not give the correct behaviour, since it only sorts the current data set on the client.
What is the best solution for this problem?
NOTE: I can get the primary sort column and the other sort columns from the UI framework.
Maybe I can sort the results by the primary sort column in the Hibernate layer?
You need to sort on the server.
Then you can either:
send the complete result set to the client and handle pagination on the client side. The problem is that the result set may be too big to retrieve from the DB and send to the client.
handle the pagination on the server side. The client and the server request only one page at a time from the DB. The problem then is that you will sort the same data again and again to extract page 1, page 2, etc. each time you ask the DB for a specific page. This can be a problem with a large database.
have a trade-off between both (for a large database):
Set a limit, say 300 items
The server asks the DB for the first 301 items according to the ORDER BY
The server keeps the result set (up to 301 items) in a cache
The client requests pages from the server one page at a time
The server handles the pagination using the cache
If there are 301 items, the client displays "The hit list contains more than 300 items. It has been truncated".
Note 1: usually, the client doesn't care if he can't go to the last page. You can improve the solution by counting the total number of rows first (no ORDER BY needed for that) so that you can display a more helpful message, e.g. "The result contained 2023 elements; only the first 300 can be viewed".
Note 2: if you request the data page by page without any order criterion, most DBs (at least Oracle) don't guarantee any ordering, so you may get the same item on page 1 and page 2 across two requests. The same problem happens if multiple items share the value used to order by (e.g. the same date): the DB doesn't guarantee any ordering between elements with the same value. If this is the case, I would suggest using the PK as the last order criterion (e.g. ORDER BY date, PK) so that the paging is done in a consistent way.
Note 3: I speak about client and server, but you can adapt the idea to your particular situation.
Always have a sort column. By default it could be "name" or "id".
Use server-side paging, i.e. pass the current page index and fetch the appropriate data subset.
In the fetch criteria/query, use the sort column. If none is selected by the client, use the default.
Thus you will have the desired behaviour without trade-offs.
It will be confusing to the user if you sort a partial result in the GUI but page on the server.
Since the data set is huge, sending the entire data set to the user and doing both paging and sorting there is a no-go.
That only leaves doing both sorting and paging on the server. You can use Criteria.addOrder() to do the sorting in Hibernate. See this tutorial.
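Putting it together, a hedged sketch with Hibernate's Criteria API; the Principal entity, the property names, and the method shape are illustrative, with the PK tie-breaker from Note 2 above:

    import java.util.List;
    import org.hibernate.Session;
    import org.hibernate.criterion.Order;

    public class PrincipalQueries {
        // One sorted page; ordering by the PK as well keeps paging consistent
        // when several rows share the same value in the sort column.
        public static List<?> fetchPage(Session session, String sortColumn,
                                        int fromIndex, int numOfRecords) {
            return session.createCriteria(Principal.class)
                    .addOrder(Order.asc(sortColumn))
                    .addOrder(Order.asc("id")) // PK tie-breaker
                    .setFirstResult(fromIndex)
                    .setMaxResults(numOfRecords)
                    .list();
        }
    }

This keeps the fromIndex/numOfRecords paging you already have, but moves the ordering into the same query so the client's sort columns apply to the whole result set rather than to one page.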