Kyoto Cabinet / Berkeley DB : Hash table size limitations - java

I am having a hard time storing hundreds of millions of key/value pairs of 16/32 bytes with a hash array on my SSD.
With Kyoto Cabinet: when it works fine, it inserts at 70,000 records/s. Once it drops, it goes down to 10-500 records/s. With the default settings, the drop happens after around one million records. Looking at the documentation, that is the default number of buckets in the array, so it makes sense. I increased this number to 25 million and indeed it works fine until around 25 million records. The problem is, as soon as I push the number of buckets to 30 million or more, the insert rate drops to 10-500 records/s from the beginning. Kyoto Cabinet is not designed to increase the number of buckets after the database is created, so I cannot insert more than 25 million records.
1/ Why would KC's insert rate get very low once the bucket number exceeds 25M?
With Berkeley DB: the best speed I got is slightly lower than KC's, closer to 50,000 records/s, but still OK. With the default settings, just like KC, the speed drops suddenly after around one million records. I know BDB is designed to extend its number of buckets gradually. Regardless of that, I tried to increase the initial number, playing with HashNumElements and FillFactor, but every attempt made the situation worse. So I still cannot insert more than 1-2 million records with BDB. I tried activating non-synchronized transactions, tried different checkpoint rates, and increased the caches. Nothing prevents the drop.
2/ What could cause BDB's insert rate to drop after 1-2 million inserts?
Note: I'm working with Java, and when the speed drops, CPU usage falls to 0-30%, whereas it stays at 100% while inserts run at the correct speed.
Note: stopping the process and resuming the insertion changes nothing, so I don't think it is related to memory limits or garbage collection.
Thx.

Below is how I managed to store billions of records despite the writing limitations encountered with KC.
With much effort, I still haven't solved the problem for either Kyoto Cabinet or Berkeley DB. However, I came up with an interesting workaround using Kyoto Cabinet.
I noticed I cannot write more than 25M records to one KC file, but reads have no such limitation: they are always fast, regardless of the size of the database. The solution I found is to create a new KC file (a new database) for every 25M new records. That way the reads are spread over many KC files and stay fast, and the writes go only to the last created file and are fast as well. The only remaining problem was to allow updates/deletions of the records in the previous files. For that, I copied the SSTables approach (a minimal code sketch follows the list below), which is:
All the files 0 to N-1 are read-only; file N is read+write.
Any insert/update/deletion is written in file N.
Reads look into files N down to 0 and return the first version found, i.e. the last-written insert/update/deletion.
A bloom filter is attached to each file to avoid accessing a file that doesn't have the wanted record.
As soon as file N reaches 25M records, it becomes read-only and file N+1 is created.
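Here is a minimal sketch of that layout in Java. It assumes a small Segment wrapper around each Kyoto Cabinet file; the Segment and ShardedStore names, the 25M roll-over constant, and the empty-array tombstone are illustrative choices of mine, not part of the Kyoto Cabinet API.

    import java.util.ArrayList;
    import java.util.List;

    // One read-only (or, for the last one, read+write) Kyoto Cabinet file.
    // This interface and its methods are hypothetical wrappers, not the KC API.
    interface Segment {
        boolean mightContain(byte[] key);   // backed by the per-file Bloom filter
        byte[] get(byte[] key);             // null if the key is not in this file
        void set(byte[] key, byte[] value); // only ever called on the last segment
        long count();                       // number of records in this file
    }

    class ShardedStore {
        private static final long MAX_RECORDS = 25_000_000L; // observed KC write limit
        private static final byte[] TOMBSTONE = new byte[0]; // marks a deleted key

        private final List<Segment> segments = new ArrayList<>(); // index 0 = oldest file

        ShardedStore(Segment first) { segments.add(first); }

        // All writes go to the newest (read+write) segment; roll over at 25M records.
        void put(byte[] key, byte[] value) {
            Segment last = segments.get(segments.size() - 1);
            if (last.count() >= MAX_RECORDS) {
                last = rollOver();            // freeze file N, open file N+1
            }
            last.set(key, value);
        }

        // A delete is just a write of the tombstone value; compaction removes it later.
        void delete(byte[] key) { put(key, TOMBSTONE); }

        // Reads scan from the newest file back to the oldest and return the first
        // (i.e. most recently written) version of the key they find.
        byte[] get(byte[] key) {
            for (int i = segments.size() - 1; i >= 0; i--) {
                Segment s = segments.get(i);
                if (!s.mightContain(key)) continue;  // Bloom filter says "definitely not here"
                byte[] v = s.get(key);
                if (v != null) {
                    return v.length == 0 ? null : v; // tombstone => record was deleted
                }
            }
            return null;
        }

        private Segment rollOver() {
            Segment next = openNewKyotoCabinetFile(segments.size()); // hypothetical factory
            segments.add(next);
            return next;
        }

        private Segment openNewKyotoCabinetFile(int index) {
            // In real code this would open a new kyotocabinet.DB hash file for read+write.
            throw new UnsupportedOperationException("wire up Kyoto Cabinet here");
        }
    }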
Notes:
Just like with SSTables, if a lot of updates/deletions are performed, we might want to perform compaction. However, contrary to SSTables, compaction here doesn't require rewriting the file. Outdated records are simply removed from the KC files, and if a KC file gets very small, it can either be removed (reinserting its records into file N) or reopened for new insertions (provided the next files are compact).
A deletion does not actually delete the record, but writes a special value (a tombstone) that marks the record as deleted. During compaction, deleted records are removed for real.
Checking whether a record exists usually requires a lookup in the database. Thanks to the Bloom filters, most negative answers can be given without any disk access (see the sketch below).
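As one possible implementation of the per-file filter, here is a sketch using Guava's BloomFilter (any Bloom filter library would do); the 25M expected insertions and 1% false-positive rate are illustrative assumptions.

    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;

    class SegmentFilter {
        // ~25M expected keys at a 1% false-positive rate; both figures are assumptions.
        private final BloomFilter<byte[]> filter =
                BloomFilter.create(Funnels.byteArrayFunnel(), 25_000_000, 0.01);

        // Called for every key inserted into this KC file.
        void add(byte[] key) { filter.put(key); }

        // false => the key is definitely not in this file, so we skip the disk access;
        // true  => the key may be there, fall through to the real KC lookup.
        boolean mightContain(byte[] key) { return filter.mightContain(key); }
    }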

Related

SQLite doing too many small size disk reads

Background: I am using SQLite to store around 10M entries, where the size of each entry is around 1 KB. I am reading this data back in chunks of around 100K entries at a time, using multiple parallel threads. Reads and writes do not happen in parallel, and all the writes are done before starting the reads.
Problem: I am experiencing too many disk reads. Each second around 3K reads are happening, and I am reading only 30 KB of data in those 3K reads (hence around 100 bytes per disk read). As a result, I am seeing really horrible performance (it takes around 30 minutes to read the data).
Questions:
Are there any SQLite settings/pragmas that I can use to avoid the small disk reads?
Are there any best practices for batched parallel reads in SQLite?
Does SQLite read all the results of a query in one go, or does it read the results in smaller chunks? If the latter, where does it store the partial results of a query?
Implementation Details: I am using SQLite with Java and my application runs on Linux. The JDBC library is https://github.com/xerial/sqlite-jdbc (version 3.20.1).
P.S. I have already built the necessary indexes and verified that no table scans are going on (using EXPLAIN QUERY PLAN).
When you are searching for data with an index, the database first looks up the value in the index, and then goes to the corresponding table row to read all the other columns.
Unless the table rows happen to be stored in the same order as the values in the index, each such table read must go to a different page.
Indexes speed up searches only if the search reduces the number of rows. If you're going to read all (or most of) the rows anyway, a table scan will be much faster.
Parallel reads will be more efficient only if the disk can actually handle the additional I/O. On rotating disks, the additional seeks will just make things worse.
(SQLite tries to avoid storing temporary results. Result rows are computed on the fly (as much as possible) while you're stepping through the cursor.)
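To illustrate the point about index lookups landing on scattered table pages, here is a hedged JDBC sketch (using the xerial driver mentioned in the question) of a covering index, i.e. an index that contains every column the query needs, so SQLite can answer the query from the index pages alone; the table and column names are made up for the example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CoveringIndexExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:data.db");
                 Statement st = conn.createStatement()) {
                // With a plain index on chunk_id, every matching row still costs a
                // separate jump to a table page to fetch the payload column.
                st.execute("CREATE INDEX IF NOT EXISTS idx_chunk ON entries(chunk_id)");
                // A covering index stores chunk_id AND payload, so the query below is
                // answered from contiguous index pages without touching the table rows.
                st.execute("CREATE INDEX IF NOT EXISTS idx_chunk_payload"
                         + " ON entries(chunk_id, payload)");
                try (ResultSet rs = st.executeQuery(
                        "SELECT payload FROM entries WHERE chunk_id = 42")) {
                    while (rs.next()) {
                        byte[] payload = rs.getBytes(1);
                        // ... process one entry ...
                    }
                }
            }
        }
    }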

How do I store frequent data (combinations of Source-Destination) in a database efficiently to get the top 10 searches for the past 30 days

I am trying to write an algorithm that handles inserts of frequently searched data.
Let's say users can search different combinations of two entities (Source-Destination). Each time a user searches, I want to store the data with a count, and if they search the same combination (Source-Destination) again, I will update the count.
In this case there are 1000 users, each user searches for different combinations (Source-Destination), and the data will be stored for 30 days.
So the total number of rows will be 100000*30*30 = 13500000 (1.3 billion) rows (using MySQL).
Please suggest if there is a better way to write this.
GOAL: I want to get the top 10 search combinations of users at any point in time.
1,000 users and 60,000 rows are nothing by today's standards. Don't even think about it, there is no performance concern whatsoever, so just focus on doing it properly instead of worrying about slowness. There will be no slowness.
The proper way of doing it is by creating a table in which each row contains the search terms ([source, destination] in your case) and a sum, and using a unique index on the [source, destination] pair of columns. Which is the same as making those two columns the primary key.
If you had 100,000,000 rows, and performance was critical, and you also had a huge budget affording you the luxury of doing whatever weird thing it takes to make ends meet, then you would perhaps want to do something exotic, like appending each search to an indexless table (allowing the fastest appends possible) and then computing the sums in a nightly batch process. But with less than a million rows such an approach would be complete overkill.
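A sketch of that "unique (source, destination) plus a sum" table in Java/JDBC, using MySQL's INSERT ... ON DUPLICATE KEY UPDATE to bump the counter in a single statement; the table and column names are illustrative assumptions.

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class SearchCounter {
        // Assumed schema:
        // CREATE TABLE search_counts (
        //   source VARCHAR(64) NOT NULL, destination VARCHAR(64) NOT NULL,
        //   cnt BIGINT NOT NULL, PRIMARY KEY (source, destination));
        private static final String UPSERT =
                "INSERT INTO search_counts (source, destination, cnt) VALUES (?, ?, 1)"
              + " ON DUPLICATE KEY UPDATE cnt = cnt + 1";

        // Insert the pair with a count of 1, or bump the existing counter.
        public static void recordSearch(Connection conn, String source, String destination)
                throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(UPSERT)) {
                ps.setString(1, source);
                ps.setString(2, destination);
                ps.executeUpdate();
            }
        }
    }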
Edit:
Aha, so the real issue is the OP's need for a "sliding window". Well, in that case, I cannot see any approach other than saving every single search along with the time that it happened, and then, in a batch process, a) computing the sums and b) deleting entries that are older than the "window".
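For the sliding-window case, a sketch of the batch side: every search is logged with its timestamp and the top 10 for the last 30 days is computed with a GROUP BY; the schema names and the MySQL date arithmetic are illustrative assumptions.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class TopSearches {
        // Assumed log table: search_log(source, destination, searched_at DATETIME).
        private static final String TOP_10 =
                "SELECT source, destination, COUNT(*) AS searches"
              + " FROM search_log"
              + " WHERE searched_at >= NOW() - INTERVAL 30 DAY"
              + " GROUP BY source, destination"
              + " ORDER BY searches DESC"
              + " LIMIT 10";

        public static void printTop10(Connection conn) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(TOP_10);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s -> %s : %d%n",
                            rs.getString("source"), rs.getString("destination"),
                            rs.getLong("searches"));
                }
            }
            // The same nightly job can purge old rows:
            // DELETE FROM search_log WHERE searched_at < NOW() - INTERVAL 30 DAY
        }
    }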

How to efficiently process strings in Java

I am facing an optimization problem in Java. I have to process a table which has 5 attributes and contains about 5 million records. To simplify the problem, let's say I have to read every record one by one and then process each record. From each record I have to generate a mathematical lattice structure which has 500 nodes. In other words, each record generates 500 more new records, which can be referred to as parents of the original record. So in total there are 500 x 5 million records, including the original plus the parent records. Now the job is to find the number of distinct records out of all 500 x 5 million records, together with their frequencies. Currently I have solved this problem as follows: I convert every record to a string with the value of each attribute separated by "-", and I count them in a Java HashMap. Since these records involve intermediate processing, a record is converted to a string and then back to a record during the intermediate steps. The code is tested and works fine, producing accurate results for a small number of records, but it cannot process 500 x 5 million records.
For a large number of records it produces the following error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
I understand that the number of distinct records is not more than 50 thousand for sure, which means the data should not cause memory or heap overflow. Can anyone suggest an option? I will be very thankful.
Most likely, you have some data structure somewhere which is keeping references to the processed records, also known as a "memory leak". It sounds like you intend to process each record in turn and then throw away all the intermediate data, but in fact the intermediate data is being kept around. The garbage collector can't throw away this data if you have some collection or something else still pointing to it.
Note also that there is the very important Java runtime parameter -Xmx. Without any further detail than what you've provided, I would have thought that 50,000 records would fit easily into the default values, but maybe not. Try doubling -Xmx (hopefully your computer has enough RAM). If this solves the problem, then great. If it just gets you twice as far before it fails, then you know it's an algorithm problem.
Using a SQLite database would let you store that much (1.3 TB?) data safely. With queries you can quickly get the information back, and the data is also persisted when your program ends.
You probably need to adopt a different approach to calculating the frequencies of occurrence. Brute force is great when you only have a few million :)
For instance, after your calculation of the "lattice structure" you could combine that with the original data and take either the MD5 or SHA-1 of it. This should be unique except when the data is not "distinct", which should then reduce your total data back down below 5 million.
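A sketch of that digest idea in Java: keep only a fixed-size SHA-1 digest of each generated record string as the HashMap key, so memory stays proportional to the ~50,000 distinct records rather than the 2.5 billion generated ones. The "-"-separated encoding follows the question; everything else is an illustrative assumption.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;
    import java.util.HashMap;
    import java.util.Map;

    public class DistinctCounter {
        private final Map<String, Long> counts = new HashMap<>();
        private final MessageDigest sha1;

        public DistinctCounter() throws Exception {
            sha1 = MessageDigest.getInstance("SHA-1");
        }

        public void add(String[] attributes) {
            // Same "-"-separated encoding as in the question.
            String record = String.join("-", attributes);
            byte[] digest = sha1.digest(record.getBytes(StandardCharsets.UTF_8));
            // 20-byte digest instead of the full record string; if you also need the
            // record text for reporting, keep one representative string per digest.
            String key = Base64.getEncoder().encodeToString(digest);
            counts.merge(key, 1L, Long::sum);            // frequency per distinct record
        }

        public Map<String, Long> frequencies() { return counts; }
    }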

JDBC/Hibernate Fetch Size and memory issues

After investigating a bit at work, I noticed that the application I'm working on is using the default fetch size (which is 10 for Oracle, from what I know). The problem is that in the majority of cases the users fetch large amounts of data (ranging from a few thousand to even hundreds of thousands of rows), and the default of 10 is really a huge bottleneck.
So the obvious conclusion here would be to make the fetch size larger. At first I was thinking about setting the default to 100 and bumping it up to 1000 for several queries. But then I read on the net that the default is so small to prevent memory issues (i.e. when the JVM heap cannot handle so much data); should I be worried about that?
I haven't seen any further explanation of this anywhere. Does it mean that a bigger fetch size means more memory overhead while fetching the result set? Or does it just mean that with the default I can fetch 10 records, GC them, fetch another 10, and so on (whereas, let's say, fetching 10,000 all at once would result in an OutOfMemoryError)? In that case I wouldn't really care, as I need all the records in memory anyway. In the former case (where a bigger result set means bigger memory overhead) I guess I should load test it first.
By setting the fetch size too big, you are risking an OutOfMemoryError.
The fact that you need all these records anyway is probably not justifiable. More likely, you need the entities reflected by the returned ResultSets. Setting the fetch size to 10000 means you're heaping 10000 records represented by JDBC classes. Of course, you don't pass these around through your application. You first transform them into your favourite business-logic entities and then hand them to your business-logic executor. This way, the records from the first fetched batch become available for GC as soon as JDBC fetches the next batch.
Typically, this transformation is done a small batch at a time, exactly because of the memory threat mentioned above.
One thing you're absolutely right about, though: you should test for performance with well-defined requirements before tweaking.
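A sketch of the pattern described above: a larger fetch size combined with transforming each row into a lightweight business object as soon as it is read, so the driver's previous fetch buffer becomes eligible for GC as the cursor advances; the query, the Customer type and the fetch size of 500 are illustrative assumptions.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    public class FetchSizeExample {
        static final class Customer {
            final long id;
            final String name;
            Customer(long id, String name) { this.id = id; this.name = name; }
        }

        public static List<Customer> loadCustomers(Connection conn) throws Exception {
            List<Customer> result = new ArrayList<>();
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, name FROM customers")) {
                ps.setFetchSize(500);              // rows per round trip, not per result set
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Convert each row immediately; don't keep the JDBC rows around.
                        result.add(new Customer(rs.getLong("id"), rs.getString("name")));
                    }
                }
            }
            return result;
        }
    }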
So the obvious conclusion here would be to make the fetch size larger.
Perhaps an equally obvious conclusion should be: "Let's see if we can cut down on the number of objects that users bring back." When Google returns results, it does so in batches of 25 or 50 sorted by greatest likelihood to be considered useful by you. If your users are bringing back thousands of objects, perhaps you need to think about how to cut down on that. Can the database do more of the work? Are there other operations that could be written to eliminate some of those objects? Could the objects themselves be smarter?
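If cutting down the result set means paging, here is a sketch of a paged query; it assumes the SQL:2008 OFFSET ... FETCH row-limiting clause (supported by Oracle 12c and later), and the table and page size are illustrative.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PagedQuery {
        public static void readPage(Connection conn, int page, int pageSize) throws Exception {
            // Row-limiting clause from SQL:2008, supported by Oracle 12c and later.
            String sql = "SELECT id, name FROM customers ORDER BY id"
                       + " OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, page * pageSize);
                ps.setInt(2, pageSize);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // ... hand one page of rows to the UI ...
                    }
                }
            }
        }
    }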

efficient db operations

Here is the scenario I am researching a solution for at work. We have a table in Postgres which stores events happening on the network. Currently it works like this: rows get inserted as network events come in, and at the same time older records which match a specific timestamp get deleted, in order to keep the table size limited to around 10,000 records; basically the same idea as log rotation. Network events come in bursts of thousands at a time, so the transaction rate is too high and causes performance degradation; after some time the server either just crashes or becomes very slow. On top of that, the customer is asking to keep the table size up to a million records, which is going to accelerate the performance degradation (since we have to keep deleting records matching a specific timestamp) and cause space management issues. We are using plain JDBC to read/write the table. Can the tech community out there suggest a better-performing way to handle the inserts and deletes in this table?
I think I would use partitioned tables, perhaps 10 x total desired size, inserting into the newest, and dropping the oldest partition.
http://www.postgresql.org/docs/9.0/static/ddl-partitioning.html
This makes the load of "dropping the oldest" much smaller than querying and deleting it.
Update: I agree with nos' comment though, the inserts/deletes may not be your bottleneck. Maybe some investigation first.
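A sketch of the partition-and-drop idea driven from JDBC. It uses declarative partitioning, which is PostgreSQL 10+ syntax (the 9.0 docs linked above describe the older inheritance/trigger setup); table, column and partition names are illustrative and are assumed to come from internal code, not user input.

    import java.sql.Connection;
    import java.sql.Statement;

    public class EventPartitions {
        // One-time setup (run once):
        // CREATE TABLE events (event_time timestamptz NOT NULL, payload text)
        //     PARTITION BY RANGE (event_time);
        public static void rotate(Connection conn, String newLabel, String from, String to,
                                  String oldLabel) throws Exception {
            try (Statement st = conn.createStatement()) {
                // Partition names and bounds come from internal scheduling code here,
                // never from user input, since DDL cannot use bind parameters.
                st.execute("CREATE TABLE IF NOT EXISTS events_" + newLabel
                         + " PARTITION OF events"
                         + " FOR VALUES FROM ('" + from + "') TO ('" + to + "')");
                // Retiring old data is a cheap DROP, not a mass DELETE.
                st.execute("DROP TABLE IF EXISTS events_" + oldLabel);
            }
        }
    }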
Some things you could try -
Write to a log, and have a separate batch process write the entries to the table.
Keep the writes as they are, do the deletes periodically or at times of lower traffic.
Do the writes to a buffer/cache, have the actual db writes happen from the buffer.
A few general suggestions -
Since you're deleting based on timestamp, make sure the timestamp is indexed. You could also do this with a counter / auto-incremented row id (e.g. delete where id < currentId - 1000000).
Also, JDBC batch writes are much faster than individual row writes (easily an order-of-magnitude speedup). Batch writing 100 rows at a time will help tremendously, if you can buffer the writes.
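A sketch of the batched-insert suggestion: buffer rows and flush them with PreparedStatement.addBatch()/executeBatch() every 100 rows instead of issuing one INSERT per network event; the table and column names are illustrative assumptions.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;

    public class BatchInserter {
        public static void insertEvents(Connection conn, List<String[]> events) throws Exception {
            String sql = "INSERT INTO network_events (event_time, source, message)"
                       + " VALUES (?, ?, ?)";
            conn.setAutoCommit(false);                   // one commit per batch, not per row
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int pending = 0;
                for (String[] e : events) {
                    ps.setString(1, e[0]);
                    ps.setString(2, e[1]);
                    ps.setString(3, e[2]);
                    ps.addBatch();
                    if (++pending % 100 == 0) {          // flush every 100 rows
                        ps.executeBatch();
                    }
                }
                ps.executeBatch();                       // flush the remainder
                conn.commit();
            }
        }
    }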
