I have an Android application that iterates through an array of thousands of integers and uses them as keys to look up pairs of integers (let us call them ids) in order to make calculations with them. It needs to do this as fast as possible, and in the end it returns a result that is crucial to the application.
I tried loading a HashMap into memory for fast access to those numbers, but it resulted in an OutOfMemoryError. I also tried writing the ids to a RandomAccessFile and storing their file offsets in another HashMap, but that was way too slow. Also, the new HashMap that only stores the offsets still occupies a large amount of memory.
Now I am considering SQLite but I am not sure if it will be any faster. Are there any structures or libraries that could help me with that?
EDIT: The number of keys is more than 20 million, whereas I only need to access thousands of them. I do not know beforehand which ones I will access, because that changes with user input.
You could use Trove's TIntLongHashMap to map primitive ints to primitive longs (which store the ints of your value pair). This saves you the object overhead of a plain vanilla Map, which forces you to use wrapper types.
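For illustration, the packing itself is just bit arithmetic, independent of Trove. A minimal sketch (a boxed HashMap stands in for TIntLongHashMap so the example is self-contained; Trove would store the same longs without the wrapper objects):

```java
import java.util.HashMap;
import java.util.Map;

public class IntPairPacking {
    // Pack two 32-bit ints into one 64-bit long.
    static long pack(int first, int second) {
        return ((long) first << 32) | (second & 0xFFFFFFFFL);
    }

    // Unpack the high 32 bits.
    static int first(long packed) {
        return (int) (packed >>> 32);
    }

    // Unpack the low 32 bits.
    static int second(long packed) {
        return (int) packed;
    }

    public static void main(String[] args) {
        // With Trove you would use TIntLongHashMap here to avoid boxing;
        // a plain Map is used only to keep the sketch self-contained.
        Map<Integer, Long> map = new HashMap<>();
        map.put(7, pack(123, -456));
        long packed = map.get(7);
        System.out.println(first(packed) + "," + second(packed)); // 123,-456
    }
}
```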
EDIT
Since your update states you have more than 20 million mappings, there are likely more space-efficient structures than a hash map. An approach that partitions your keys into buckets, combined with some sub-key compression, will likely save you half the memory over even the most efficient hash map implementation.
SQLite is an embedded relational database, which uses indexes. I would bet it is much faster than using RandomAccessFile. You can give it a try.
My suggestion is to rearrange the keys into buckets. What I mean is: identify (more or less) the distribution of your keys, then create a file corresponding to each range of keys (the point is that every file must contain only as many integers as can fit in memory, and no more). Then, when you search for a key, you read the whole corresponding file into memory and look for it there.
For example, assuming the distribution of the keys is uniform, store the 500k values corresponding to key values 0-500k in one file, the 500k values corresponding to keys 500k-1M in the next, and so on...
EDIT: if you did try this approach and it was still slow, I still have some tricks up my sleeve:
First, make sure that your division is actually close to equal across all the buckets.
Try to make the buckets smaller, by making more buckets.
The idea behind a correct division into buckets by ranges is that when you search for a key, you go to the corresponding range bucket, and the key is either in it or not in the whole collection. So there is no point in concurrently reading another bucket.
I have never done this, since I'm not sure concurrency helps with I/O, but it may be helpful to read the whole file with 2 threads, one from top to bottom and the other from bottom to top, until they meet (or something like that).
While you read the whole bucket into memory, split it into 3-4 ArrayLists and run 3-4 worker threads to search for your key in each of the arrays; the search should end much faster then.
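A minimal sketch of the range-bucket lookup, assuming the uniform 500k-per-bucket split from the example (names are illustrative):

```java
import java.util.Arrays;

public class RangeBuckets {
    static final int BUCKET_SIZE = 500_000;

    // Which bucket file a key belongs to, assuming a uniform key distribution.
    static int bucketIndex(int key) {
        return key / BUCKET_SIZE;
    }

    // Search one bucket that has been loaded into memory; sorting it lets
    // a binary search replace the linear scan.
    static boolean containsKey(int[] bucket, int key) {
        Arrays.sort(bucket);
        return Arrays.binarySearch(bucket, key) >= 0;
    }

    public static void main(String[] args) {
        // Pretend this array was read from the file for keys 500k-1M.
        int[] bucket = {500_123, 600_000, 999_999};
        System.out.println(bucketIndex(600_000));          // 1
        System.out.println(containsKey(bucket, 600_000));  // true
    }
}
```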
As I have studied the HashSet class, it uses the concept of a fill ratio (load factor), which says that if the HashSet is filled up to this limit, a larger HashSet is created and the values are copied into it. Why don't we let the HashSet fill up completely with objects and only then create a new one? Why was this concept introduced for HashSet?
Both ArrayList and Vector are accessed by positional index, so that there are no conflicts and access is always O(1).
A hash-based data structure is accessed by a hashed value, which can collide and degrade into access to a second-level "overflow" data structure (list or tree). If you have no such collisions, access is O(1), but if you have many collisions, it can be significantly worse. You can control this a bit by allocating more memory (so that there are more buckets and hopefully fewer collisions).
As a result, there is no need to grow an ArrayList to a capacity more than you need to store all elements, but it does make sense to "waste" a bit (or a lot) in the case of a HashSet. The parameter is exposed to allow the programmer to choose what works best for her application.
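For instance, both knobs are exposed directly in the constructor, so a programmer who knows the final size up front can avoid intermediate rehashes entirely (a minimal sketch):

```java
import java.util.HashSet;
import java.util.Set;

public class LoadFactorDemo {
    public static void main(String[] args) {
        // With the default load factor of 0.75, a HashSet resizes (rehashes)
        // once more than capacity * 0.75 entries exist. Pre-sizing to
        // expectedSize / loadFactor means no rehash happens mid-insertion,
        // at the cost of "wasting" about a quarter of the buckets.
        int expectedSize = 1000;
        Set<Integer> set = new HashSet<>((int) (expectedSize / 0.75f) + 1, 0.75f);
        for (int i = 0; i < expectedSize; i++) {
            set.add(i);
        }
        System.out.println(set.size()); // 1000
    }
}
```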
As Jonny Henly has described, it is because of the way data is stored.
ArrayList is a linear data structure, while HashSet is not. In a HashSet, data is stored in an underlying array based on hash codes. In a way, the performance of a HashSet is linked to how many buckets are filled and how well the data is distributed among those buckets. Once this distribution of data goes beyond a certain level (called the load factor), re-hashing is done.
HashSet is primarily used to ensure that the basic operations (such as adding, fetching, modifying and deleting) are performed in constant time regardless of the number of entries being stored in the HashSet.
Though a well-designed hash function can achieve this, designing one might take time. So if performance is a critical requirement for the application, we can use the load factor to help keep the operations constant time. In that sense, the load factor and the hash function complement each other.
I agree that this may not be a perfect explanation, but I hope it does bring some clarity on the subject.
I'm writing a small system in Java in which I extract n-gram features from text files and later need to perform a Feature Selection process in order to select the most discriminative features.
The feature extraction process for a single file returns a Map that contains, for each unique feature, its occurrences in the file. I merge all the files' Maps into one Map that contains the Document Frequency (DF) of all unique features extracted from all the files. The unified Map can contain more than 10,000,000 entries.
Currently the feature extraction process works well and I want to perform Feature Selection, for which I need to implement Information Gain or Gain Ratio. I will have to sort the Map first, perform the computations and save the results, in order to finally get a list of (feature, Feature Selection score) pairs.
My question is:
What is the best practice and the best data structure to hold this large amount of data (~10M) and perform computations?
This is a very broad question, so the answer is going to be broad too. The solution depends on (at least) these three things:
The size of your entries
Storing 10,000,000 integers will require about 40MiB of memory, while storing 10,000,000 x 1KiB records will require more than 9GiB. These are two different problems. Ten million integers are trivial to store in memory in any stock Java collection, while keeping 9GiB in memory will force you to tweak and tune the Java Heap and garbage collector. If the entries are even larger, say 1MiB, then you can forget about in-memory storage entirely. Instead, you'll need to focus on finding a good disk backed data structure, maybe a database.
The hardware you're using
Storing ten million 1KiB records on a machine with 8GiB of RAM is not the same as storing them on a server with 128GiB. Things that are pretty much impossible with the former machine are trivial with the latter.
The type of computation(s) you want to do
You've mentioned sorting, so things like TreeMap or maybe PriorityQueue come to mind. But is that the most intensive computation? And what is the key you're using to sort them? Do you plan on locating (getting) entities based on other properties that aren't the key? If so, that requires separate planning. Otherwise you'd need to iterate over all ten million entries.
Do your computations run in a single thread or multiple threads? If you might have concurrent modifications of your data, that requires a separate solution. Data structures such as TreeMap and PriorityQueue would have to be either locked or replaced with concurrent structures such as ConcurrentLinkedHashMap or ConcurrentSkipListMap.
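To illustrate the sorting point: if what you sort by is the score (a computed value) rather than the feature string, a TreeMap keyed by feature won't help, and you end up sorting the entries explicitly. A minimal sketch (feature names and scores are made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FeatureRanking {
    // Copy the entries out and sort by score, highest first. A TreeMap would
    // not help directly here, because it orders by *key*, not by value.
    static List<Map.Entry<String, Double>> rankByScore(Map<String, Double> scores) {
        List<Map.Entry<String, Double>> ranked = new ArrayList<>(scores.entrySet());
        ranked.sort(Map.Entry.<String, Double>comparingByValue().reversed());
        return ranked;
    }

    public static void main(String[] args) {
        // Hypothetical feature -> score map; in practice the scores would
        // come from the Information Gain / Gain Ratio computation.
        Map<String, Double> scores = new HashMap<>();
        scores.put("hello world", 0.3);
        scores.put("machine learning", 0.9);
        scores.put("new york", 0.8);

        System.out.println(rankByScore(scores).get(0).getKey()); // machine learning
    }
}
```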
You can use a caching system: check MapDB, it's very efficient and has a tree map implementation (so you can have your data ordered without any effort). Also, it provides data stores to save your data to disk when it cannot be held in memory.
// a sample that uses off-heap memory to back the map
Map<String, String> map = DBMaker.newMemoryDirectDB().make().getTreeMap("words");

// put some entries into the map
map.put("aa", "bb");
map.put("cc", "dd");
My intuition is that you could take inspiration from the initial MapReduce paradigm and partition your problem into several smaller but similar ones and then aggregate these partial results in order to reach the complete solution.
If you solve one smaller problem instance at a time (i.e. file chunk) this will guarantee you a space consumption penalty bounded by the space requirements for this single instance.
This approach of processing the file lazily will work regardless of the data structure you choose.
Problem
I'm trying to normalize columns in very large raw, de-normalized CSV tables. Column values are short strings (10-100 bytes). I'm trying to find a faster solution than my current approach(es).
Example
input.csv
john,london
jean,paris
bill,london
Is converted to the following files:
input.normalized.csv
1,1
2,2
3,1
input.col1.csv
1,john
2,jean
3,bill
input.col2.csv
1,london
2,paris
I've currently got two approaches to normalizing these datasets.
Current Approaches
Single pass in-memory
A single-pass approach, storing column value -> normalized_id mappings in an associative array (a Java HashMap in my case). This will run out of memory at some point, but it's fast while it can store everything in memory. A simple way of lowering memory usage would be to do a single pass per column.
Multipass sorting
A multipass approach based on sorting. Column values get their line numbers attached and are then sorted (in a memory-efficient merge-sort manner). For example, the column values london,paris,london have line numbers attached and are then sorted: london;1,london;3,paris;2 .
I can now keep a single "unique value counter" and simply compare each value with the previous one (e.g. london == london, so do not increment the unique value counter). At the end, I have unique_id,linenum pairs that I can sort by line number to reconstruct the normalized column. Columns can then be merged in a single pass.
This approach can be done in very limited memory, depending on the memory usage of the sorting algorithm applied. The good news is that this approach is easy to implement in something like hadoop, utilising its distributed sorting step.
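The "unique value counter" step can be sketched as follows, with an in-memory sort standing in for the external merge sort:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class MultipassNormalize {
    // Returns {unique_id, linenum} pairs in sorted-value order.
    static List<int[]> assignIds(String[] values, int[] lines) {
        // Sort indices by column value (an external merge sort would do
        // this step in the real, low-memory version).
        Integer[] idx = new Integer[values.length];
        for (int i = 0; i < values.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparing((Integer i) -> values[i]));

        // Single pass over the sorted run: bump the counter only
        // when the value changes.
        List<int[]> result = new ArrayList<>();
        int uniqueId = 0;
        String prev = null;
        for (int i : idx) {
            if (!values[i].equals(prev)) {
                uniqueId++;
                prev = values[i];
            }
            result.add(new int[]{uniqueId, lines[i]});
        }
        return result;
    }

    public static void main(String[] args) {
        // The london/paris example from above.
        String[] values = {"london", "paris", "london"};
        int[] lines = {1, 2, 3};
        for (int[] pair : assignIds(values, lines)) {
            System.out.println(pair[0] + "," + pair[1]); // 1,1  1,3  2,2
        }
    }
}
```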
MY QUESTION
The multipass approach is painfully slow compared to a single-pass approach (or a single-pass-per-column approach). So I'm wondering what the best way to optimize that approach would be, or if someone could suggest alternative approaches?
I reckon I'm looking for a (distributed) key-value store of some kind, that has as low memory usage as possible.
It seems to me that using Trove would be a good, simple alternative to using Java HashMaps, but I'd like something that can handle the distribution of keys for me.
Redis would probably be a good bet, but I'm not impressed by its memory usage per key-value pair.
Do you know the rough order of magnitude of the input columns? If so (and you don't need to preserve the original input file order), then you can just use a sufficiently large hash function to avoid collisions for the input keys.
If you insist on having a dense consecutive key space, then you've already covered the two primary choices. You could certainly try redis, I've seen it used for 10s of millions of key-value pairs, but it is probably not going to scale beyond that. You could also try memcached. It might have a slightly lower memory overhead than redis, but I would definitely experiment with both, since they are fairly similar for this particular usage. You don't actually need Redis's advanced data structures.
If you need more key-values than you can store in memory on a single machine, you could fall back to something like BDB or Kyoto cabinet, but eventually this step is going to become the bottleneck to your processing. The other red flag is if you can fit an entire column in memory on a single machine, then why are you using Hadoop?
Honestly, relying on a dense ordered primary key is one of the first things that gets thrown out in a NoSQL DB as it assumes a single coordinated master. If you can allow for even some gaps, then you can do something similar to a vector clock.
One final alternative, would be to use a map-reduce job to collect all the duplicate values up by key and then assign a unique value using some external transactional DB counter. However, the map-reduce job is essentially a multi-pass approach, so it may be worse. The main advantage is that you will be getting some IO parallelism. (Although the id assignment is still a serial transaction.)
I'm looking to implement a B-tree (in Java) for a "one use" index where a few million keys are inserted, and queries are then made a handful of times for each key. The keys are <= 40 byte ascii strings, and the associated data always takes up 6 bytes. The B-tree structure has been chosen because my memory budget does not allow me to keep the entire temporary index in memory.
My issue is about the practical details in choosing a branching factor and storing nodes on disk. It seems to me that there are two approaches:
One node always fits within one block. Achieved by choosing a branching factor k such that, even for the worst-case key length, the storage requirement for keys, data and control structures is <= the system block size. k is likely to be low, and nodes will in most cases have a lot of empty room.
One node can be stored on multiple blocks. Branching factor is chosen independent of key size. Loading a single node may require that multiple blocks are loaded.
The questions are then:
Is the second approach what is usually used for variable-length keys? Or is there some completely different approach I have missed?
Given my use case, would you recommend a different overall solution?
I should mention in closing that I'm aware of the jdbm3 project and am considering using it. I will attempt to implement my own in any case, both as a learning exercise and to see if case-specific optimization can yield better performance.
Edit: Reading about SB-Trees at the moment:
S(b)-Trees
Algorithms and Data Structures for External Memory
I'm missing option C here:
At least two tuples always fit into one block, and the block size is chosen accordingly. Blocks are filled with as many key/value pairs as possible, which means the branching factor is variable. If the block size is much greater than the average size of a (key, value) tuple, the wasted space will be very low. Since the optimal IO size for disks is usually 4k or greater, and you have a maximum tuple size of 46 bytes, this is automatically true in your case.
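For the numbers in the question, the worst case works out like this (ignoring per-block control structures):

```java
public class BlockFanout {
    public static void main(String[] args) {
        int blockSize = 4096;       // a typical optimal IO size
        int maxTupleSize = 40 + 6;  // worst-case key + fixed-size value

        // Even with every tuple at maximum size, a 4KiB block holds
        // 89 tuples and wastes only 2 bytes.
        int worstCaseFanout = blockSize / maxTupleSize;
        int worstCaseWaste = blockSize % maxTupleSize;
        System.out.println(worstCaseFanout); // 89
        System.out.println(worstCaseWaste);  // 2
    }
}
```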
And for all options you have some variants: B* or B+ Trees (see Wikipedia).
The JDBM BTree is already self-balancing. It also has defragmentation, which is very fast and solves all the problems described above.
One node can be stored on multiple blocks. Branching factor is chosen independent of key size. Loading a single node may require that multiple blocks are loaded.
Not necessarily. JDBM3 uses mapped memory, so it never reads a full block from disk into memory. It creates a 'view' on top of the block and reads only the partial data actually needed. So instead of reading a full 4KB block, it may read just 2x128 bytes. This depends on the underlying OS block size.
Is the second approach what is usually used for variable-length keys? or is there some completely different approach I have missed?
I think you missed the point that increasing node size decreases performance, as more data has to be read. And a single tree can share both approaches (newly inserted nodes use the first, and the second applies after defragmentation).
Anyway, a flat file with a mapped memory buffer is probably best for your problem, since you have a fixed record size and just a few million records.
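A minimal sketch of that flat-file idea with fixed 46-byte records, using java.nio mapped memory (the file layout and names are illustrative):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FixedRecordFile {
    static final int RECORD_SIZE = 46; // 40-byte key + 6-byte value

    // Writes one record at index 1 and reads its key back.
    static String roundTrip() throws IOException {
        Path file = Files.createTempFile("records", ".dat");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map room for 3 fixed-size records; the OS pages data in lazily.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 3 * RECORD_SIZE);

            // Records are addressed by index * RECORD_SIZE; no search structure needed.
            buf.position(1 * RECORD_SIZE);
            byte[] key = new byte[40];
            byte[] ascii = "hello".getBytes(StandardCharsets.US_ASCII);
            System.arraycopy(ascii, 0, key, 0, ascii.length);
            buf.put(key);
            buf.put(new byte[]{1, 2, 3, 4, 5, 6}); // the 6-byte value

            // Random-access read of the same record.
            buf.position(1 * RECORD_SIZE);
            byte[] readKey = new byte[40];
            buf.get(readKey);
            return new String(readKey, StandardCharsets.US_ASCII).trim();
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // hello
    }
}
```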
Also have a look at leveldb. It has a new Java port which almost beats JDBM:
https://github.com/dain/leveldb
http://code.google.com/p/leveldb/
You could avoid this hassle if you use some embedded database. Those have solved these problems and some more for you already.
You also write: "a few million keys" ... "[max] 40 byte ascii strings" and "6 bytes [associated data]". This does not add up: one gig of RAM would allow you more than "a few million" entries.
I'm processing some generated data files (hundreds of MBytes) which contain several G objects. I need random access to these objects. A possible implementation, I guess, might be a big hash table. My program is written in Java, and it seems java.util.HashMap cannot handle this (somehow it's extremely slow). Could anyone recommend a solution for randomly accessing these objects?
If a HashMap is extremely slow, then the two most likely causes are as follows:
The hashCode() and/or equals(Object) methods on your key class could be very expensive. For instance, if you use an array or a collection as a key, the hashCode() method will access every element each time you call it, and the equals method will do the same for equal keys.
Your key class could have a poor hashCode() method that is giving the same value for a significant percentage of the (distinct) keys used by the program. When this occurs you get many key collisions, and that can be really bad for performance when the hash table gets large.
I suggest you look at these possibilities first ... before changing your data structure.
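To see how much the second possibility matters, here is a small (hypothetical) experiment comparing a key class whose hashCode() always returns the same value against one that hashes properly:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.function.IntFunction;

public class HashQuality {
    // Worst-case key: every instance lands in the same bucket.
    static class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public int hashCode() { return 42; } // constant hash
    }

    // Same key with a proper hashCode.
    static class GoodKey {
        final int id;
        GoodKey(int id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof GoodKey && ((GoodKey) o).id == id;
        }
        @Override public int hashCode() { return Objects.hash(id); }
    }

    static <K> long timePuts(Map<K, Integer> map, IntFunction<K> keyOf, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) map.put(keyOf.apply(i), i);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 20_000;
        long bad = timePuts(new HashMap<>(), BadKey::new, n);
        long good = timePuts(new HashMap<>(), GoodKey::new, n);
        // The constant-hash version is typically much slower; on Java 8+
        // the overflowing bucket is treeified, which softens but does not
        // remove the penalty.
        System.out.println("bad/good slowdown: " + (bad / Math.max(good, 1)) + "x");
    }
}
```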
Note: if "several G objects" means several billion objects, then you'll have trouble holding the files' contents in memory ... unless you are running this application on a machine with 100's of gigabytes of RAM. I advise you do some "back of the envelope" calculations to see if what you are trying to do is feasible.
Whatever your keys are, make sure you're generating a good hash for each one via hashCode(). A lot of times bad HashMap performance can be blamed on colliding hashes. When there's a collision, HashMap generates a linked list for the colliding objects.
Worst case, if you're returning the same hash for all objects, HashMap essentially becomes a linked list. Here's a good starting place for writing hash functions: http://www.javamex.com/tutorials/collections/hash_function_guidelines.shtml
A few hundred MB cannot hold several billion objects unless each object is a single bit (which is not really an object, IMHO).
How I would approach this is to use a memory-mapped file to map in the contents of the data, and to build your own hash table in another memory-mapped file (which requires you to scan the data once to build the keys).
Depending on the layout of the data, it is worth remembering that random access is not the most cache-efficient access pattern, i.e. your cache loads lines of 64 bytes (depending on architecture), and if your structure doesn't fit in memory, record-based tables may be more efficient.