I have a 20-column grid with anywhere from 100 to 1,000 rows.
If each cell averages 50 characters, I would estimate that a 1,000-row grid comes to 20 × 50 × 1,000 = 1,000,000 characters, roughly 1 MB.
The data for this grid has to be returned by the server in one (or more) AJAX requests. The grid is un-editable... it is just a way of representing a lot of information (about human genes, in particular).
I am having a hard time deciding whether I should return this in one AJAX request or several. Do you think this is too much data (1MB) to return in the XML/JSON response of one AJAX request? Is this an anti-pattern? Or does it make sense seeing how all the data is logically part of one grid?
This is more of a design question than anything else. I appreciate any feedback.
Could you not load all the data once using a non-Ajax request and then update only the cells that change via Ajax?
Maybe it would be interesting to keep the grid "state" on the server, so after each field is edited you send the new contents to the server. That would increase the server usage and bandwidth, but would make it more responsive when the user sends the "submit" command. It also allows faster input validation (showing an error message almost as soon as the user has modified a cell, and not half an hour later).
As an improvement, to be on the safe side, keep in memory (JS memory) a list of "dirty" (modified) fields and clear a field's flag when its related Ajax response tells you that the server has ack'ed the call; when the user hits "submit", all fields still marked dirty are sent again to the server.
That said, as long as you stay away from XML, I do not think it is such a heavy load (of course, that will depend on the hardware and the number of concurrent users you have to service).
I'd suggest you fetch the data in batches, i.e., fetch the first 20-25 rows, or just enough to fill the viewport, then gradually load the next batch as the user scrolls through or nears the end of the previous batch. That might make it seamless.
Fetching all the data at once might not be an option considering the number of records you might have.
Furthermore, it's not about too much data, it's about the time it takes to fetch that data. I can guarantee you that (depending on how you manipulate the JSON response) the browser can handle a very large amount of data.
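If you do go the batched route, the server side can stay fairly simple: one endpoint that accepts an offset and a limit and returns only that slice as JSON. A minimal sketch using JAX-RS (the path, the clamp value and the in-memory rows are made up purely for illustration):

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Hypothetical endpoint: returns one "batch" of grid rows per request as JSON.
@Path("/genes")
public class GeneGridResource {

    // Stand-in for your real data source (DB query, service call, etc.).
    private List<String> loadRows(int offset, int limit) {
        return IntStream.range(offset, offset + limit)
                .mapToObj(i -> "gene-row-" + i)
                .collect(Collectors.toList());
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> getRows(@QueryParam("offset") @DefaultValue("0") int offset,
                                @QueryParam("limit") @DefaultValue("25") int limit) {
        // Clamp the limit so a single request can never ask for the whole grid at once.
        return loadRows(offset, Math.min(limit, 100));
    }
}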
I have some problems understanding the best concept for my problem.
My architecture is pretty basic. I have a backend with data that can be updated and clients which load that data with some filters.
The backend holds the data in an EhCache.
The data model is pretty basic, for example:
{
id: string,
startDate: date,
endDate: date,
username: string,
group: string
}
The data can only be modified by another backend service.
When data is modified, added or deleted, a data update event is generated.
The clients are all web clients and fetch the data from the cache through a Spring Boot REST service.
For the data request, each client sends its own request settings. There are different settings, like date and text filters. For example:
{
contentFilter: Filter,
startDateFilter: date,
endDateFilter: date
}
The backend uses these settings to filter the data from the cache and then sends the response with the filtered data.
When the cache generates an update event, every client gets notified over a websocket connection.
Each client then requests the full data again, with the same request settings as before.
My problem is that there are many cache updates happening, and the clients can end up loading a lot of data if the full dataset is loaded every time.
For example, I have this scenario:
Full dataset in cache: 100 000 rows
Update of rows in cache: 5-10 random rows every 1-5 seconds
Client1 dataset with request filter: 5000 rows
Client2 dataset with request filter: 50 rows
Now, every time a client receives an update notification, it will load its complete dataset (5000 rows), and that every 1-5 seconds. If the updates only ever happen on the same row, and that row isn't loaded by the client because of its filter settings, then the client would be loading the data unnecessarily.
I am not sure what would be the best solution to reduce the client updates and increase the performance.
My first thought was to just send the updated line directly with the websocket connection to the clients.
But for that I would have to know whether the client "needs" the updated line. If the updates happen on rows that the client doesn't need to load because of its filter settings, then I would spam the client with unnecessary updates.
I could add a check on the client side to see if the id of the updated row is in the loaded dataset, but then I would need a separate check for rows that are added to the cache rather than updated.
But I am not sure if that is the best practice. And unfortunately I can not find many resources about this topic.
The most efficient things are always the most work, sadly.
I won't claim to be an expert at this kind of thing - on either the implementation(s) available or even the best practices - but I can give some food for thought at least, which may or may not be of help.
My first choice: your first thought.
You have the problem of knowing if the updated item is relevant to the client, due to the filters.
Save the filters for the client whenever they request the full data set!
A row gets updated; check through all the saved client filters to see if it is relevant to any of them, and push it out to the clients it matches.
The effort for maintaining that filter cache is minimal (update whenever they change their filters), and you'll also be sending down minimal data to the clients. You also won't be iterating over a large dataset multiple times, just the smaller client set and only for the few rows that have been updated.
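A minimal sketch of that filter cache in plain Java (all names here are illustrative, and the push itself would go through whatever websocket layer you already have):

import java.time.LocalDate;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical server-side cache of each client's last-known filter settings.
public class ClientFilterRegistry {

    // Keyed by websocket session id (or any stable client id).
    private final Map<String, RequestSettings> filtersByClient = new ConcurrentHashMap<>();

    // Called whenever a client requests data: remember its filters.
    public void rememberFilters(String clientId, RequestSettings settings) {
        filtersByClient.put(clientId, settings);
    }

    // Called for every updated row: push it only to clients whose filters match.
    public void onRowUpdated(Row row, RowPusher pusher) {
        filtersByClient.forEach((clientId, settings) -> {
            if (settings.matches(row)) {
                pusher.push(clientId, row); // e.g. send it over the existing websocket
            }
        });
    }

    // Simplified versions of the data model and filter from the question.
    public record Row(String id, LocalDate startDate, LocalDate endDate, String username, String group) {}

    public record RequestSettings(String contentFilter, LocalDate startDateFilter, LocalDate endDateFilter) {
        boolean matches(Row row) {
            // Assumed semantics: text filter against username, date filters as a range overlap.
            return (contentFilter == null || row.username().contains(contentFilter))
                    && (startDateFilter == null || !row.endDate().isBefore(startDateFilter))
                    && (endDateFilter == null || !row.startDate().isAfter(endDateFilter));
        }
    }

    // Hypothetical abstraction over however you push messages to a single client.
    public interface RowPusher {
        void push(String clientId, Row row);
    }
}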
Another option:
If you don't go ahead with option 1, option 2 might be to group updates - assuming you have the luxury of not needing immediate, real-time updates.
Instead of telling the clients about every data update, only tell them every x seconds that there might be data waiting for them (might be, you little tease).
I was going to add other options but, to be honest, I don't see why you'd worry about much beyond option 1, maybe with an option 2 addition to reduce traffic if that's an issue.
'Best practice'-wise, sending down multiple FULL datasets to multiple clients multiple times a second is certainly not it.
Sending only the data relevant to each client is a much better solution, and if you can further reduce how much the client even needs to send (i.e. only their filter updates, rather than having them re-send something you could already have saved), that's an added bonus.
Edit:
Ah, stateless server - though it's not really stateless. You're using web sockets, so the server has some kind of state for those connections. It's already stateful so option 1 doesn't really break anything.
If it's to be completely stateless, then you also can't store the updated rows of data, so you can't return those individually. You're back to what you're doing which is a full round-trip and data read + serve.
Option 3, though, if you're semi-stateless (you don't want to add any metadata to those socket connections) but do hold the updated rows: timestamp them and have the clients send the time of their last update along with their filters. You can then return only the rows updated since that time, using their provided filters; the timestamp effectively becomes just another filter, which also keeps this option stateless.
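Sticking with plain Java, that could look roughly like this (names are illustrative; the assumption is that each cached row carries a lastModified timestamp):

import java.time.Instant;
import java.util.List;

// Hypothetical delta query: the client's last-update timestamp is just another filter.
public class DeltaQueryService {

    public record Row(String id, Instant lastModified, String username) {}

    public List<Row> findChangedRows(List<Row> allRows, Instant clientLastUpdate, String contentFilter) {
        return allRows.stream()
                // The client's normal filter settings, as before...
                .filter(row -> contentFilter == null || row.username().contains(contentFilter))
                // ...plus the timestamp filter: only rows modified since the client's last fetch.
                .filter(row -> row.lastModified().isAfter(clientLastUpdate))
                .toList();
    }
}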
Either way, limiting the updated data back down to the client is the main goal if for nothing else than saving data transfer.
Edit 2:
Sounds like you may need to send two bits of data down (or three if you want to split things even further - makes life easier client-side, I guess):
{
newItems: [{...}, ...],
updatedItems: [{...}, ...],
deletedIds: [1,2...]
}
Yes, when their request for an update comes, you'll have to check through your updated items to see if any are deleted and of relevance to the client's filters, but you can send down a minimal list of ids rather than whole rows that your client can then remove.
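The payload itself could be a small generic DTO along these lines (field names follow the JSON above; this is only a sketch, not your actual API):

import java.util.List;

// Hypothetical payload for one update push: only what changed since the client's last state.
public record UpdatePayload<T>(
        List<T> newItems,       // rows the client has not seen yet (and that match its filters)
        List<T> updatedItems,   // rows the client already has, with changed content
        List<String> deletedIds // ids only; the client just removes them locally
) {}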
In Angular 8+, if we need to display a list of records, we display the results with pagination.
We have more than 1 million records, and the number of records will keep growing.
I am using Spring Boot and MySQL as the database.
But what would be the preferable approach:
Getting all the data from the server at once and handling pagination on the client side.
Getting 10 records at a time and, when the user clicks the Next button, fetching the next 10 records from the server.
I think you should use pagination rather than getting all the data from the server.
Getting all the data from the server is a costly operation, since, as you mention, your application has more than a million records.
With pagination, the API is called only when needed and returns data based on your per-page pagination request.
I would strongly advise you to go with variant #2.
The main reason to do pagination is not really because it makes sense to only display a few entries in the UI at once. Instead, pagination allows you to only transfer the necessary entries from large data sets (such as yours). This greatly improves performance and reduces the amount of data that has to be sent from the server to the client.
Variant #1 will have very poor performance, because the client has to fetch all 1,000,000 records to then only display 10 of them. This does not make a lot of sense and goes directly against the idea and the advantages of pagination.
Variant #2, on the other hand, will only fetch the entries that are actually displayed, and it will transfer only roughly 0.001% of the data that variant #1 would.
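For reference, Spring Data makes variant #2 cheap to implement; a rough sketch, assuming a Spring Boot 2.x / javax setup (the entity, repository and endpoint names are made up):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical entity standing in for your million-row table.
@Entity
class RecordEntity {
    @Id
    private Long id;
    private String name;
}

// Spring Data generates the paged query (LIMIT/OFFSET in MySQL) for you.
interface RecordRepository extends JpaRepository<RecordEntity, Long> {
}

@RestController
class RecordController {

    private final RecordRepository repository;

    RecordController(RecordRepository repository) {
        this.repository = repository;
    }

    // GET /records?page=0&size=10 returns one page plus total-count metadata for the UI.
    @GetMapping("/records")
    Page<RecordEntity> getRecords(@RequestParam(defaultValue = "0") int page,
                                  @RequestParam(defaultValue = "10") int size) {
        return repository.findAll(PageRequest.of(page, size));
    }
}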
I would use something in between, loading maybe 100 or 1000 records. With one million your browser will run out of memory, and with 10 your user gets bored...
I have a generic problem where I am loading data from the backend in blocks, i.e. in pages. I have created a cache which stores a maximum of 2-3 pages at a time.
Say page 1 covers rows 1-1000
Page 2 covers rows 1001-2000
class Page {
    List<Data> data;
    int startOffset;
    int endOffset;
    int pageNo;
}
Here the client could be the UI or any other service.
Now the client asks for data in ranges such as 1-100 or 101-200. As long as the range can be served from a single page, I can handle it by calculating the page number from the supplied range.
If that page is not there, I can load the range from the backend and keep it in the cache.
However, I am facing an issue when the client requests data that spans multiple blocks.
For example, when the client asks for rows 950-1050, the data spans two pages.
Any suggestion on how to model the classes/blocks in such a case, i.e. how to keep the server-side data in memory in blocks and send it to the GUI?
I don't see a problem in using two (or even more) contiguous blocks fetched from the DB, enough to cover the region requested by the UI. You can do it either eagerly or lazily (that is, discover that you don't have enough rows and fetch extra). This is a pretty normal situation for the kind of process you are trying to execute. Overlapping is always present when the user is free to choose the desired region.
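A rough sketch of that stitching, assuming fixed-size blocks keyed by page number (class and method names are made up; the loader stands in for your backend call, and eviction down to 2-3 pages is left out):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.IntFunction;

// Hypothetical block cache: rows are grouped into fixed-size pages, and a request
// for an arbitrary range is served by stitching together all the pages it touches.
public class BlockCache<T> {

    private final int pageSize;
    private final IntFunction<List<T>> pageLoader;                    // loads one page from the backend
    private final Map<Integer, List<T>> pages = new LinkedHashMap<>(); // pageNo -> rows of that page

    public BlockCache(int pageSize, IntFunction<List<T>> pageLoader) {
        this.pageSize = pageSize;
        this.pageLoader = pageLoader;
    }

    // Returns rows for the 1-based inclusive range [from, to], loading pages on demand.
    public List<T> getRange(int from, int to) {
        int firstPage = (from - 1) / pageSize;
        int lastPage = (to - 1) / pageSize;
        List<T> result = new ArrayList<>();
        for (int pageNo = firstPage; pageNo <= lastPage; pageNo++) {
            List<T> page = pages.computeIfAbsent(pageNo, pageLoader::apply);
            int pageStart = pageNo * pageSize + 1;                     // first row number in this page
            int sliceFrom = Math.max(from, pageStart) - pageStart;
            int sliceTo = Math.min(to, pageStart + page.size() - 1) - pageStart + 1;
            if (sliceFrom < sliceTo) {
                result.addAll(page.subList(sliceFrom, sliceTo));
            }
        }
        return result;
    }
}

With pageSize = 1000, a request for 950-1050 touches pages 0 and 1, takes rows 950-1000 from the first and 1001-1050 from the second, and returns them as one list.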
I’m creating a web application
Frontend - reactjs, backend - java.
Frontend and backend communicate with each other via REST.
On the UI I show a list of items, and I need to filter them by some params.
Option 1: filter logic is on the front end
In this case I just need to make a GET call to the backend and get all items.
After the user chooses a filter option, the filtering happens on the UI.
Pros: I don't need to send a request to the back end and wait for the response, so refreshing the list should be faster.
Cons: if I need multiple frontend clients, let's say a mobile app, then I need to implement the filters again in that app too.
Option 2: filter logic is on the back end
In this case I get all list items when the app is loading. After the user changes the filter options, I need to send a GET request with the filter params and wait for the response.
After that, I update the list of items on the UI.
Pros: filter logic is written only once.
Cons: it will probably be much slower, because it takes time to send the request and get the result back.
Question: where should the filter logic be, in the frontend or in the backend? And what is the best practice?
Filter and limit on the back end. If you had a million records, and a hundred thousand users trying to access those records at the same time, would you really want to send a million records to EVERY user? It'd kill your server and user experience (waiting for a million records to propagate from the back end for every user AND then propagate on the front end would take ages when compared to just getting 20-100 records and then clicking a (pagination) button to retrieve the next 20-100). On top of that, then to filter a million records on the front-end would, again, take a very long time and ultimately not be very practical.
From a real-world standpoint, most websites have some sort of record limit: eBay = 50-200 records, Amazon = ~20, Target = ~20... etc. This ensures quick server responses and a smooth user experience for every user.
This depends on the size of your data.
For example, if you have a large amount of data, it is better to implement the filter logic on the backend and let the DB perform the operations.
If you have a small amount of data, you can do the filtering on the frontend after getting the data.
Let us understand this with an example.
Suppose you have an entity with 100,000 records and you want to show it in a grid.
In this case it is better to get 10 records on every call and show them in the grid.
If you want to perform any filter operation on this, it is better to query the DB on the backend and get the results.
If you have just 1000 records in your entity, it can be beneficial to get all the data and do all the filtering on the frontend.
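If you go the backend route with Spring Data, the filter can usually be pushed into the query together with the paging; a small sketch (the entity, its fields and the repository are hypothetical):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical entity; the DB applies both the filter and the paging,
// so only one page of matching rows ever crosses the wire.
@Entity
class ItemEntity {
    @Id
    private Long id;
    private String name;
}

interface ItemRepository extends JpaRepository<ItemEntity, Long> {

    // Derived query: WHERE lower(name) LIKE lower('%...%'), with LIMIT/OFFSET added by the database.
    Page<ItemEntity> findByNameContainingIgnoreCase(String name, Pageable pageable);
}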
Most likely begin with the frontend (unless you're dealing with huge amounts of data):
1. Implement filtering on the frontend (unless for some reason it's easier to do it on the backend, which I find unlikely).
2. Iterate until filtering functionality is somewhat stable.
3. Analyze your traffic, see if it makes sense to put the effort into implementing backend filtering. See what percentage of requests are actually filtered, and what savings you'd be getting from backend filtering.
4. Implement (or not) backend filtering depending on the results of #3.
As a personal note, the accepted answer is terrible advice:
"If you had a million records, and a hundred thousand users trying to access those records at the same time"; nothing is forcing the hundred thousand users to use filtering, your system should be able to handle that doomsday scenario. Backend filtering should be just an optimization, not a solution.
once you do filtering on the backend you'll probably want to do pagination as well; this is not a trivial feature if you want consistent results.
doing backend filtering is likely to become much more complex than just frontend filtering; you should be aware that you're going to spend a significant amount of time (not only for the initial implementation but also for ongoing maintenance) and ask yourself if it's not premature optimization.
TL;DR: do whatever is easier for you and don't worry about it until it makes sense to start optimizing.
It depends on the specific requirements of your application, but in my opinion the safer bet would be the back-end.
Considering you need filtering in the first place, I assume you have enough data so that paging through it is required. In this case, you need to have the filtering on the back-end.
Let's say you have a page size of 20. After you apply the filter, you would expect to have a page of 20 entities that match that specific filtering criteria in the UI. This can't be achieved if you fetch 20 entities, store them in the front end, and afterwards apply the filter to them.
Also, if you have enough data, fetching all of it in the front-end will be impossible due to memory constraints.
Here is my situation: I have a Java EE single-page application. All client-server communication is AJAX-based, with JSON used as the data exchange format. One of my requests takes around 1 minute to calculate the data required by the client. This data is also huge (it could be > 20 MB), so it is not possible to pass the entire dataset to JavaScript in one go. For this reason I am only passing a few records to the client and using a grid to display the data with a paging option.
Now, when the user clicks the next-page button, I need to get more data. My question is: how do I cache data on the server side? I need this data for only one user at a time. Would you recommend caching all the data on the first request, using the session id as the key?
Any other suggestions ?
I am assuming you are using a DB backend for that. I'd use limits to return small chunks of data; most DB vendors have a solution for this. That would make your queries faster, and most JS frameworks with grid-type components support paginating results (ExtJS, for example).
If you are fetching data from a 3rd party and passing it on (with or without modifications), I'd still stick with the database and use this workflow: pull data from the 3rd party, save it in the DB, and have your widget fetch the small chunks required by customers.
Hope this helps.
The cheapest (and not entirely ineffective) way of caching data in a Java EE web application is to use the session object, as you intend to do. Its drawback is that it requires the developer to ensure the cache does not leak memory; it is up to the developer to null out the reference to the object once it is no longer needed.
However, even if you do implement the poor man's cache, caching 20 MB of data is not advisable, as it does not scale well. The scalability question arises when multiple users use the same functionality of the application, in which case 20 MB per user is a lot of data.
You're better off returning paginated "datasets" in the form of JSON, along the lines of the ValueList design pattern. Each request for data then results in a partial retrieval, which is sent down the wire to the client. That way, you never have to cache the complete results of the query execution, and you can return partial datasets. Whether you cache at all is entirely up to you; usually caching is done for large datasets that are used time and again.
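A minimal sketch of that partial retrieval with plain JPA (the entity and query are placeholders, and the expensive one-minute calculation would need its own handling): each page request re-runs the query with setFirstResult/setMaxResults instead of parking 20 MB in the session.

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

// Hypothetical entity standing in for one row of the large (> 20 MB) result.
@Entity
class ResultEntity {
    @Id
    private Long id;
    private String payload;
}

// Each paging request retrieves only one slice; nothing is kept in the HTTP session.
public class ResultPageDao {

    @PersistenceContext
    private EntityManager entityManager;

    public List<ResultEntity> fetchPage(int pageNumber, int pageSize) {
        return entityManager
                .createQuery("SELECT r FROM ResultEntity r ORDER BY r.id", ResultEntity.class)
                .setFirstResult(pageNumber * pageSize)  // skip the earlier pages
                .setMaxResults(pageSize)                // limit to one page
                .getResultList();
    }
}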