Getting 503 error while fetching data from REST API - Java

I have a POST API that fetches data with a limit of 25,000 records per call. There are nearly 1,600,000 records in total, but in one application the fetch gets stuck at around 1,250,000 records. What can be done?
It works fine in Postman, but I noticed that it pulls the records in a loop, 25,000 at a time. Maybe that is taking too long and the server is not able to handle the request?

My suggestion would be to use pagination in this case.
You can read the records in a sorted order and, when you fetch the next 25k, pass in the last sorted key you read.
This is the standard way to fetch data, because you cannot ask the database for 1.6 million records at once; that would put a lot of load on the database.
Also, while fetching the 25k you can process them in smaller batches, maybe 1k at a time, to keep the memory footprint down.
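A minimal sketch of that keyset (seek) pattern with plain JDBC, assuming a hypothetical records table with an indexed id column and a DataSource already configured; the id of the last row of each page is passed back in as the cursor for the next call:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class KeysetPager {

    private final DataSource dataSource;

    public KeysetPager(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /**
     * Fetches the next page of at most pageSize rows whose id is greater than
     * lastSeenId. Pass 0 for the first page, then the last id of the previous page.
     */
    public List<Long> nextPage(long lastSeenId, int pageSize) throws SQLException {
        String sql = "SELECT id FROM records WHERE id > ? ORDER BY id LIMIT ?";
        List<Long> ids = new ArrayList<>(pageSize);
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, lastSeenId);
            ps.setInt(2, pageSize);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong("id"));
                }
            }
        }
        return ids;
    }
}
```

Because the WHERE clause seeks directly to the last key, each page costs roughly the same no matter how deep into the 1.6 million records the caller is, unlike OFFSET-based paging.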

Related

Pagination vs. all data from server

In Angular 8+, when we need to display a list of records, we show the results with pagination.
We have more than 1 million records, and the count will keep growing.
I am using Spring Boot and MySQL as the database.
What would be the preferable approach?
1. Get all the data from the server at once and handle pagination on the client side.
2. Get 10 records at a time, display them, and when the user clicks the Next button fetch the next 10 records from the server.
I think you should use pagination rather than loading all the data from the server.
Fetching everything at once is a costly operation, especially since, as you mention, your application has more than a million records.
With pagination, the API is only called when needed and returns just the data for the requested page.
I would strongly advise you to go with variant #2.
The main reason to do pagination is not really because it makes sense to only display a few entries in the UI at once. Instead, pagination allows you to only transfer the necessary entries from large data sets (such as yours). This greatly improves performance and reduces the amount of data that has to be sent from the server to the client.
Variant #1 will have very poor performance, because the client has to fetch all 1,000,000 records to then only display 10 of them. This does not make a lot of sense and goes directly against the idea and the advantages of pagination.
Variant #2, on the other hand, will only fetch the entries that are actually displayed, transferring roughly 0.001% of the data that variant #1 would.
I would use something in between and load maybe 100 or 1,000 records. With one million your browser will run out of memory, and with 10 your user gets bored...
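A minimal sketch of variant #2 with Spring Boot and Spring Data JPA, where the RecordEntity, RecordRepository, and /records endpoint are all hypothetical names; the database applies the LIMIT/OFFSET, so the client only ever receives one page:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical entity used only for illustration.
@Entity
class RecordEntity {
    @Id
    @GeneratedValue
    Long id;
    String payload;
}

interface RecordRepository extends JpaRepository<RecordEntity, Long> {
}

@RestController
class RecordController {

    private final RecordRepository repository;

    RecordController(RecordRepository repository) {
        this.repository = repository;
    }

    // GET /records?page=0&size=10 returns only the requested slice;
    // the limiting happens in the database, not on the client.
    @GetMapping("/records")
    Page<RecordEntity> list(@RequestParam(defaultValue = "0") int page,
                            @RequestParam(defaultValue = "10") int size) {
        return repository.findAll(PageRequest.of(page, size, Sort.by("id")));
    }
}
```

The Page response also carries the total count, so the Angular side can render the pager without ever holding more than one page in memory.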

How to efficiently track unprocessed records in database?

I have a table in a database that is continuously being populated with new records that simply have to be sent to Elasticsearch.
Every 15 minutes the table accrues about 15,000 records. My assignment is to create a @Scheduled job that runs every 15 minutes, gathers the unprocessed records, and posts them to Elasticsearch.
My question is: what is the most efficient way to do this? How do I track unprocessed records efficiently?
My suggestion is to use the INSERTED_DATE column that is already in this table and persist the last processed INSERTED_DATE in an auxiliary table each time. However, it can happen that two or more records were inserted with the same timestamp but only one of them was processed. There are surely other corner cases that invalidate my approach.
Could you share any thoughts about it? It looks like a typical problem for a data-intensive application, but I am facing it for the first time in real life.
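Not a definitive answer, but one common variant of the watermark idea is to key it on a monotonically increasing id rather than the timestamp and to commit the new watermark in the same transaction as the read. A sketch with Spring's @Scheduled and JdbcTemplate, where the table and column names (events, watermark, last_processed_id) and the EsClient wrapper are all hypothetical:

```java
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class UnprocessedRecordsJob {

    private final JdbcTemplate jdbc;
    private final EsClient es;

    public UnprocessedRecordsJob(JdbcTemplate jdbc, EsClient es) {
        this.jdbc = jdbc;
        this.es = es;
    }

    @Scheduled(fixedDelay = 15 * 60 * 1000)
    @Transactional
    public void pushNewRecords() {
        long lastId = jdbc.queryForObject(
                "SELECT last_processed_id FROM watermark WHERE job = 'es-sync'", Long.class);

        // Keyset read: everything strictly after the last committed id, in order.
        List<Map<String, Object>> rows = jdbc.queryForList(
                "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT 20000", lastId);
        if (rows.isEmpty()) {
            return;
        }

        es.bulkIndex(rows); // hypothetical bulk-indexing call

        long newLastId = ((Number) rows.get(rows.size() - 1).get("id")).longValue();
        jdbc.update("UPDATE watermark SET last_processed_id = ? WHERE job = 'es-sync'", newLastId);
    }

    /** Placeholder for the real Elasticsearch client wrapper. */
    public interface EsClient {
        void bulkIndex(List<Map<String, Object>> rows);
    }
}
```

An auto-increment key sidesteps the "two rows, same timestamp" collision, although rows written by still-open transactions can commit with an id below the watermark, so a small overlap window or a dedicated PROCESSED flag may still be needed depending on how the table is written.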

Query result records are taking a long time

I have a scenario where a bulk number of records, not less than 8,000, needs to be fetched from the DB at a time and then set in a 'resultList' (an ArrayList) before rendering the result JSP page. There is a parent-child relationship in the result set (almost 3,000 parent records, the rest children), and I iterate over the parents with one for loop and over the child records with another nested for loop to populate the ArrayList.
But it takes at least 30 minutes to run the two nested loops, while the DB query itself takes just 1.5 minutes to return all the records. The database is Oracle.
My question is: how can I decrease the turnaround time of my code? How can I minimize the looping time? Is there any other way to handle bulk records? Please suggest.
Thanks.
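It is hard to say without seeing the code, but nested parent/child loops over roughly 3,000 × 5,000 rows are quadratic, and indexing the children by parent id first is the usual fix. A minimal sketch, with the row types being hypothetical stand-ins for the real result objects:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical record shapes used only to illustrate the grouping idea.
record ParentRow(long id, String name) {}
record ChildRow(long parentId, String data) {}
record ParentWithChildren(ParentRow parent, List<ChildRow> children) {}

public class ResultAssembler {

    // One pass to index the children by parent id, then one pass over the
    // parents: O(parents + children) instead of O(parents * children).
    public static List<ParentWithChildren> assemble(List<ParentRow> parents,
                                                    List<ChildRow> children) {
        Map<Long, List<ChildRow>> byParent = new HashMap<>();
        for (ChildRow c : children) {
            byParent.computeIfAbsent(c.parentId(), k -> new ArrayList<>()).add(c);
        }

        List<ParentWithChildren> result = new ArrayList<>(parents.size());
        for (ParentRow p : parents) {
            result.add(new ParentWithChildren(
                    p, byParent.getOrDefault(p.id(), List.of())));
        }
        return result;
    }
}
```

If the per-iteration work also includes lazy-loaded lookups back to Oracle, that would explain the 30 minutes on its own; fetching everything in the one query and assembling in memory as above keeps the loop purely CPU-bound.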

JSON files are not fully inserted into an SQLite database when going over a certain batch size

For this Android project I'm pulling about 45 tables from SQL Server and converting them to JSON files to be inserted into the local database on my phone. The only problem is that one of these 45 tables has 30,000+ records, which increases the insertion time by minutes.
I've tried increasing the batch size (the number of rows taken at one time) to 5,000, but then most of the data from that table is not inserted completely. A batch size of 3,750 works just fine, but I experience data loss once I hit 4,100. My default batch size was 250... Is there a logical explanation for this?
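Hard to say without the insert code, but if each batch is assembled as one big multi-row INSERT, SQLite's per-statement limits on SQL length and bound parameters are worth checking. A minimal sketch of the usual alternative on Android, one transaction plus a compiled statement reused per row, assuming a hypothetical items(id, payload) table and a JSONArray of rows:

```java
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteStatement;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public final class BulkInserter {

    private BulkInserter() {
    }

    /**
     * Inserts every object of the JSON array into the hypothetical "items" table.
     * One transaction plus one compiled statement keeps the insert fast without
     * ever building a single oversized multi-row INSERT.
     */
    public static void insertAll(SQLiteDatabase db, JSONArray rows) throws JSONException {
        SQLiteStatement stmt =
                db.compileStatement("INSERT INTO items (id, payload) VALUES (?, ?)");
        db.beginTransaction();
        try {
            for (int i = 0; i < rows.length(); i++) {
                JSONObject row = rows.getJSONObject(i);
                stmt.clearBindings();
                stmt.bindLong(1, row.getLong("id"));
                stmt.bindString(2, row.getString("payload"));
                stmt.executeInsert();
            }
            db.setTransactionSuccessful(); // nothing is committed if an exception was thrown above
        } finally {
            db.endTransaction();
            stmt.close();
        }
    }
}
```

With this pattern the "batch size" stops mattering for correctness, because each row is its own small statement and the transaction provides the speed-up.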

com.mysql.jdbc.PacketTooBigException: Packet too long to query

I am working on a MySQL and Java application.
The requirement is to save as many files as possible, along with some other records, in the DB. It should support at least 5 million records.
My question: saving that many records with files is quite difficult, and I can hit the "Packet for query is too large" exception.
I have two options:
1. Increase MAX_ALLOWED_PACKET.
2. Store the records in limited chunks with some job trigger (for example, beyond 300k records, split the records and save them in separate queries, but I am not exactly sure how to split them).
I do not want to increase MAX_ALLOWED_PACKET, because users can send any number of records. Say we increase the size to accept 500k records; if a user later sends a million records to insert, we will get the same exception again.
Is there any better solution for this?
I found a solution: rather than processing the txt file directly, I zip it first and then send it for insertion, and it is able to process 5 million records.
Posting it here in case someone faces the same issue and wants to try this.
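For option 2, JDBC batching with a fixed chunk size keeps each round trip well under max_allowed_packet no matter how many records the user sends. A minimal sketch, with the documents(name, content) table and the chunk size purely illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public final class ChunkedInsert {

    // Tune so that one executed chunk stays comfortably under max_allowed_packet.
    private static final int BATCH_SIZE = 10_000;

    private ChunkedInsert() {
    }

    /** Inserts all rows into the hypothetical documents(name, content) table in fixed-size batches. */
    public static void insertAll(Connection con, List<String[]> rows) throws SQLException {
        String sql = "INSERT INTO documents (name, content) VALUES (?, ?)";
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            int pending = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++pending == BATCH_SIZE) {
                    ps.executeBatch(); // one manageable chunk per round trip
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch();
            }
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}
```

Whether the driver additionally rewrites each chunk into multi-row statements depends on the connection settings (for example rewriteBatchedStatements on Connector/J), so the chunk size is the knob to tune against max_allowed_packet.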
