I'm developing a Java REST API that uses client data from a PostgreSQL database.
The numbers:
- About 600 clients at the beginning
- Some of them making requests every few seconds
Because clients pay per request, we need to check whether their number of successful requests has reached their limit. Updating the 'hitsCounter' field in PostgreSQL after every request would be bad for performance, so we are thinking about implementing a caching layer with Redis.
The idea:
When a client makes his first request, we retrieve his data from PostgreSQL and store it in the Redis cache. We then work with this cached data, for example incrementing the 'hitsCounter' value, until the client stops making requests.
In parallel, every few minutes a background process persists the data from the Redis cache back to the database tables, so in the end the updated data is back in PostgreSQL and we can work with it later.
I think this obviously increases performance, but I'm not sure about the "background process". One option is to check the TTL of the cache entries and, if it is lower than some threshold (meaning the client has stopped making requests), persist the data.
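For illustration, here is a minimal sketch of the counter part of this idea, assuming the Jedis client, a hypothetical 'hits:<clientId>' key scheme and a 'clients' table with a 'hits_counter' column (all names are placeholders, not our real schema):

```java
import redis.clients.jedis.Jedis;

public class HitCounter {

    private final Jedis jedis = new Jedis("localhost", 6379);

    // Called once per successful request; returns the new counter value.
    public long recordHit(String clientId) {
        String key = "hits:" + clientId;
        long hits = jedis.incr(key);   // atomic increment in Redis
        jedis.expire(key, 600);        // refresh the TTL on every hit (10 minutes here)
        // compare hits against the client's paid limit before serving the next request
        return hits;
    }

    // Background job, run every few minutes: push the counters back to PostgreSQL.
    public void flushToPostgres(java.sql.Connection conn) throws java.sql.SQLException {
        for (String key : jedis.keys("hits:*")) {   // fine for ~600 clients; use SCAN for bigger key spaces
            long hits = Long.parseLong(jedis.get(key));
            String clientId = key.substring("hits:".length());
            try (java.sql.PreparedStatement ps = conn.prepareStatement(
                     "UPDATE clients SET hits_counter = ? WHERE id = ?")) {
                ps.setLong(1, hits);
                ps.setString(2, clientId);
                ps.executeUpdate();
            }
        }
    }
}
```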
I would love to hear some opinions about this. Is this a good idea? Do you know some better alternatives?
Perfectly reasonable idea, but you've not mentioned any measurements you've made. What is the bottleneck in your target hardware with your target transaction levels? Without knowing that, you can't say.
You could perhaps use an unlogged table. Just insert a row for every query, then summarise every 5 minutes, clearing out the old data (a rough sketch of this is at the end of this answer). Then again, with HOT updates and, say, a 75% fill-factor, maybe plain updates are more efficient. I don't know (and nor do you); we haven't measured it.
Not enough? Stick it on its own tablespace on ssd.
Not enough? Stick it on its own vm/machine.
Not enough? Just write the damn stuff to flat files on each front-end box and batch the data once a minute into the database.
Also - how much are they paying per query? Do you care if power fails and you lose five seconds of query logs? Do you need to be able to reproduce receipts for each query with originating details and a timestamp?
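A rough JDBC sketch of the unlogged-table variant mentioned above, with invented table and column names ('hit_log', 'clients.hits_counter'), just to make the shape concrete:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class HitLogSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/billing", "app", "secret")) {

            // Cheap-to-write staging table: unlogged = no WAL, but its contents are lost on a crash.
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE UNLOGGED TABLE IF NOT EXISTS hit_log " +
                           "(client_id bigint NOT NULL, hit_at timestamptz NOT NULL DEFAULT now())");
            }

            // Per request: one tiny insert instead of an UPDATE on the client's row.
            try (PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO hit_log (client_id) VALUES (?)")) {
                ps.setLong(1, 42L);
                ps.executeUpdate();
            }

            // Every 5 minutes: roll the staging rows up into the real counter and clear them.
            // (A real version would only delete the rows it summarised, to avoid losing
            //  inserts that arrive between the UPDATE and the TRUNCATE.)
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                st.execute("UPDATE clients c SET hits_counter = c.hits_counter + s.n " +
                           "FROM (SELECT client_id, count(*) AS n FROM hit_log GROUP BY client_id) s " +
                           "WHERE c.id = s.client_id");
                st.execute("TRUNCATE hit_log");
            }
            conn.commit();
        }
    }
}
```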
In Angular 8+, if we need to display a list of records, we display the results in a paginated way.
We have more than 1 million records, and the number will keep growing in the future.
I am using Spring Boot and MySQL as the database.
But which would be the preferable approach?
1. Get all the data from the server at once and handle pagination on the client side.
2. Get 10 records at a time, display them, and when the user clicks the Next button fetch the next 10 records from the server.
I think you should use pagination rather than fetching all the data from the server.
Getting all the data from the server is a costly operation, since, as you mention, your application has more than a million records.
With pagination, the API is only called when needed and returns data per page based on your pagination request.
I would strongly advise you to go with variant #2.
The main reason to do pagination is not really because it makes sense to only display a few entries in the UI at once. Instead, pagination allows you to only transfer the necessary entries from large data sets (such as yours). This greatly improves performance and reduces the amount of data that has to be sent from the server to the client.
Variant #1 will have very poor performance, because the client has to fetch all 1,000,000 records to then only display 10 of them. This does not make a lot of sense and goes directly against the idea and the advantages of pagination.
Variant #2 on the other hand will only fetch the entries that are actually displayed. And it will only transfer roughly 0.001% of the data that variant #1 would (10 of the 1,000,000 records).
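A minimal server-side sketch of variant #2 with Spring Data JPA (entity, repository and endpoint names are placeholders; I'm assuming the jakarta.persistence annotations of recent Spring Boot versions, older ones use javax.persistence):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@Entity
class RecordEntity {
    @Id
    private Long id;
    // other columns omitted
}

interface RecordRepository extends JpaRepository<RecordEntity, Long> {
}

@RestController
class RecordController {

    private final RecordRepository repository;

    RecordController(RecordRepository repository) {
        this.repository = repository;
    }

    // GET /records?page=0&size=10 -> one page of 10 rows, fetched with LIMIT/OFFSET by the database
    @GetMapping("/records")
    Page<RecordEntity> list(@RequestParam(defaultValue = "0") int page,
                            @RequestParam(defaultValue = "10") int size) {
        return repository.findAll(PageRequest.of(page, size, Sort.by("id")));
    }
}
```

The Angular grid then only ever requests the page it is about to display, and the Page wrapper carries the total element count so the client can render the pager.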
I would use something in between and load maybe 100 or 1,000 records. With one million your browser will run out of memory, and with 10 your user gets bored...
I'm currently working on a Java project for which I need to prepare a big (to me) MySQL database. I have to do web scraping using Jsoup and store the results in my database as well. As I estimate it, I will have roughly 1,500,000 to 2,000,000 records to insert. In my first attempt I just used a loop to insert these records, and it took me one week to insert about a third of them, which I think is too slow. Is it possible to make this process multi-threaded, so that I can split my records into three sets, say 500,000 records per set, and then insert them into one database (one table, specifically)?
Multi-threading isn't going to help you here. You'll just move the contention bottleneck from your app server to the database.
Instead, try using batch inserts; they generally make this sort of thing orders of magnitude faster. See "3.4 Making Batch Updates" in the JDBC tutorial.
Edit: As @Jon commented, you need to decouple the fetching of the web pages from their insertion into the database, otherwise the whole process will run at the speed of the slowest operation. You could have multiple threads fetching web pages and adding the data to a queue, and then a single thread draining the queue into the database using batch inserts.
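A rough sketch of that decoupling, assuming a 'pages(url, title)' table and MySQL Connector/J (table name and connection details are invented):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ScrapeWriter {

    record PageRow(String url, String title) {}

    // Fetcher threads put scraped rows here; one writer thread drains it.
    private static final BlockingQueue<PageRow> QUEUE = new LinkedBlockingQueue<>(10_000);

    public static void main(String[] args) {
        new Thread(ScrapeWriter::drainToDatabase).start();
        // ... start Jsoup fetcher threads that call QUEUE.put(new PageRow(url, title)) ...
    }

    static void drainToDatabase() {
        // rewriteBatchedStatements=true lets Connector/J turn the batch into multi-row INSERTs.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/scrape?rewriteBatchedStatements=true", "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO pages (url, title) VALUES (?, ?)")) {
            conn.setAutoCommit(false);
            while (true) {
                // Collect 1000 rows, then send them as one batch and one commit.
                for (int i = 0; i < 1000; i++) {
                    PageRow row = QUEUE.take();
                    ps.setString(1, row.url());
                    ps.setString(2, row.title());
                    ps.addBatch();
                }
                ps.executeBatch();
                conn.commit();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

The fetcher threads never touch JDBC, and the writer pays for one round trip and one commit per 1,000 rows instead of per row.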
Just make sure two (or more) threads don't use the same connection at the same time; using a connection pool resolves that. C3P0 and Apache DBCP come to mind...
You can insert these records from different threads provided they use different primary key values.
You should also look at Spring Batch which I believe will be useful in your case.
You can chunk your record set into batches and do this, but perhaps you should think about other factors as well.
Are you doing a network round trip for each INSERT? If yes, latency could be the real enemy. Try batching those requests to cut down on network traffic.
Do you have transactions turned on? If yes, the size of the rollback log could be the problem.
I'd recommend profiling the app server and the database server to see where the time is being spent. You can waste a lot of time guessing about the root cause.
I think a multi-threaded approach could be useful for your issue, but you have to use a connection pool such as C3P0 or the Tomcat 7 connection pool for better performance.
Another solution is to use a batch-operation provider such as Spring Batch; other utilities for batch operations exist as well.
Another solution is to use a PL/SQL procedure with an input structure parameter.
Say I have an SQL table that consists of one million rows, let's say a user table.
What type of software do I need in order to handle 10 reads/writes every second? I was thinking of using a Java NIO server to handle the connections.
But how does the back-end database work? Could I simply use MySQL on the same computer?
Any insight would be great: links, reading, examples, books?
I know SQL and have done a lot of SQLite, but I have never built a scalable system to handle this kind of load.
Edit/update, regarding helios' comment:
- How many reads vs. writes? 50/50
- Do you need up-to-date reads (no delay)? Yes?
- How big is each item? 10% of rows have 10-15 columns, the rest have 1-3 columns
- Are you accessing them individually? No; none of the user threads interact, but there can be simultaneous DB reads/writes on the same row (just make it synchronized?)
So you need 10 transactions/second on a table with a million rows.
That is really neither a huge data set nor high performance.
MySQL (currently 5.5+, InnoDB engine), running on a single server, can easily handle that.
You may want to read the first five chapters of 'High Performance MySQL', published by O'Reilly.
For a NoSQL DB I suggest MongoDB, see http://www.mongodb.org/
If you make use of JDBC connection pooling (like C3P0, DBCP etc.) you can have parallel inserts, with 10 threads (or more) simultaneously inserting data. Your limit would then be your platform resources (memory, I/O etc.).
All this holds, however, only if the insertion process itself can run in parallel threads (i.e. you do not have a specific requirement to insert records sequentially) and if what you are doing are simple inserts, not something complex that locks the table or causes other transactions to wait.
Also consider using JDBC prepared statements, and committing in batches rather than after each record. This would speed things up greatly.
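As an illustration, a sketch of pooled, parallel, batched inserts with Apache Commons DBCP2 (table name, pool size and chunking are arbitrary choices for the example):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.commons.dbcp2.BasicDataSource;

public class ParallelInserter {

    public static void main(String[] args) {
        BasicDataSource pool = new BasicDataSource();
        pool.setUrl("jdbc:mysql://localhost/scrape");
        pool.setUsername("app");
        pool.setPassword("secret");
        pool.setMaxTotal(10);                       // at most 10 connections = 10 parallel writers

        ExecutorService workers = Executors.newFixedThreadPool(10);
        List<List<String[]>> chunks = List.of();    // fill with your record chunks (url/title pairs)
        for (List<String[]> chunk : chunks) {
            workers.submit(() -> insertChunk(pool, chunk));
        }
        workers.shutdown();
    }

    static void insertChunk(BasicDataSource pool, List<String[]> rows) {
        try (Connection conn = pool.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO pages (url, title) VALUES (?, ?)")) {
            conn.setAutoCommit(false);
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
            }
            ps.executeBatch();                      // one round trip per chunk...
            conn.commit();                          // ...and one commit per chunk, not per row
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Each worker borrows its own connection from the pool, so no two threads ever share a connection, and each chunk costs one batch round trip and one commit.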
I want to store data with every request (which user viewed which page of my site).
With each request I will put the data (~100 bytes) in the memcache.
Every 5 seconds I will persist that data from the memcache to the datastore.
How rare would data loss be in this scenario?
You basically can't trust memcache to keep your data at all. It's just a cache, after all, and can choose to evict data whenever it feels it is necessary.
Having said that, in your particular scenario, the worst that will happen is you will lose 5 seconds worth of data. I don't think that's a big deal if you're just storing "pageview" data. Besides, unless the cache is running out of memory, there's really no need for it to evict data and so it's probably going to be fairly rare.
That depends on how heavily your app uses memcache for other things. Instead of this approach, though, I would suggest using task queues to store the data to the datastore for each request, without slowing the user-facing request down.
Do not trust memcached to remember your things for later. If you want something persisted then persist it there and then.
I'm currently writing a Java project against MySQL in a cluster with ten nodes. The program simply pulls some information from the database, does some calculation, and then pushes some data back to the database. However, there are millions of rows in the table. Is there any way to split up the job and make use of the cluster architecture? How do I do multi-threading across different nodes?
I watched an interesting presentation on using Gearman to do Map/Reduce style things on a MySQL database. It might be what you are looking for: see here. There is a recording on the MySQL webpage here (you have to register for mysql.com though).
I'd think about doing that calculation in a stored procedure on the database server and pass on bringing millions of rows to the middle tier. You'll save yourself a lot of bytes on the wire. Depending on the nature of the calculation, your schema, indexing, etc. you might find that the database server is well equipped to do that calculation without having to resort to multi-threading.
I could be wrong, but it's worth a prototype to see.
Assume the table you want to process (A) has 10 million rows. Create a table B in the database to record the ranges of rows already claimed by each node. Each node's Java program first reads the last row claimed by the other nodes, then inserts a row into B announcing which range of rows it is going to process (you decide the chunk size; let's assume each node processes 1,000 rows at a time).
Node 1 reads table B, finds it empty, and inserts the row ('Node1', 1000), meaning it will process table A up to primary key <= 1000 (assuming the primary key of table A is numeric and ascending). Node 2 then comes along, sees that the first 1,000 primary keys are taken, and inserts ('Node2', 2000), telling the others it will process rows 1001 to 2000. Please note that access to table B must be synchronized, i.e. only one node may work on it at a time.
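A sketch of that claim step over JDBC, assuming table B is called 'work_claims(node_name, last_row)' and using a MySQL advisory lock (GET_LOCK/RELEASE_LOCK) as the synchronisation described above (all names are made up for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class RangeClaimer {

    static final int CHUNK = 1000;

    // Claim the next CHUNK of table A's primary keys for this node.
    // Returns the upper bound; the node then processes rows (upper - CHUNK, upper].
    static long claimNextRange(Connection conn, String nodeName) throws Exception {
        try (Statement st = conn.createStatement()) {
            // Cross-node mutex so only one node extends the claimed range at a time.
            st.execute("SELECT GET_LOCK('work_claims', 10)");
            try {
                long upper;
                try (ResultSet rs = st.executeQuery(
                        "SELECT COALESCE(MAX(last_row), 0) FROM work_claims")) {
                    rs.next();
                    upper = rs.getLong(1) + CHUNK;
                }
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO work_claims (node_name, last_row) VALUES (?, ?)")) {
                    ps.setString(1, nodeName);
                    ps.setLong(2, upper);
                    ps.executeUpdate();
                }
                return upper;
            } finally {
                st.execute("SELECT RELEASE_LOCK('work_claims')");
            }
        }
    }
}
```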
Since you only have one MySQL server, make sure you're using the InnoDB engine to reduce table locking on updates.
Also, I'd try to keep your queries as simple as possible, even if you have to run more of them. This increases the chance of query cache hits and reduces the overall workload on the backend, offloading some of the query matching and processing to the front ends (where you have more resources). It also reduces the time a row lock is held, thereby decreasing contention.
The proposed Gearman solution is probably the right tool for this job, as it lets you offload batch processing from MySQL to the cluster transparently.
You could set up sharding with a MySQL instance on each machine, but the setup time, maintenance and changes to the database access layer might be a lot of work compared to a Gearman solution. You might also want to look at the experimental Spider engine, which could allow you to use multiple MySQL servers in unison.
Unless your calculation is very complex, most of the time will be spent retrieving data from MySQL and sending the results back to MySQL.
As you have a single database, no amount of parallelism or clustering on the application side will make much difference.
So your best options would be to do the update in pure SQL if that is at all possible, or to use a stored procedure, so that all processing takes place within the MySQL server and no data movement is required.
If this is not fast enough then you will need to split your database among several instances of MySql and come up with some schema to partition the data based on some application key.
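If the stored-procedure route works for your calculation, the Java side shrinks to a single call; for example, with a hypothetical procedure recalc_all() that does all the work in-database:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class InDatabaseCalculation {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/mydb", "app", "secret");
             CallableStatement call = conn.prepareCall("{call recalc_all()}")) {
            // All reads, calculation and writes happen inside MySQL;
            // no rows cross the network to the application tier.
            call.execute();
        }
    }
}
```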