Trie saves space, but how? - java

I am confused as to how a trie implementation saves space and stores data in the most compact form.
If you look at the tree below: when you store a character at any node, you also need to store a reference to it, and thus for each character of the string you end up storing a reference as well.
OK, we save some space when a common character arrives, but we lose more space in storing a reference to that character node.
So isn't there a lot of structural overhead to maintain the tree itself? If a TreeMap were used instead, let's say to implement a dictionary, couldn't it save a lot more space, since each string would be kept in one piece and no space would be wasted on references?

To save space when using a trie, one can use a compressed trie (also known as a patricia trie or radix tree), for which one node can represent multiple characters:
In computer science, a radix tree (also patricia trie or radix trie) is a space-optimized trie data structure where each node with only one child is merged with its child. The result is that every internal node has at least two children. Unlike in regular tries, edges can be labeled with sequences of characters as well as single characters. This makes them much more efficient for small sets (especially if the strings are long) and for sets of strings that share long prefixes.
Example of a radix tree:
Note that a trie is usually used as an efficient data structure for prefix matching on a set of strings. A trie can also be used as an associative array (like a hash table) where the key is a string.

Space is saved when you have lots of words to be represented by the tree, because many words share the same path in the tree; the more words you have, the more space you save.
But there is a better data structure if you want to save space. A trie doesn't save as much space as a directed acyclic word graph (DAWG) does, because a DAWG shares common nodes throughout the structure, whereas a trie only shares common prefixes. The wiki entry explains this in much more detail, so have a look at it.
Here is the difference (graphically) between Trie and DAWG:
The strings "tap", "taps", "top", and "tops" stored in a Trie (left) and a DAWG (right), EOW stands for End-of-word.
The tree on the left side is the trie, and the tree on the right is the DAWG. Compare them and see how the DAWG saves space efficiently. The trie has duplicate nodes that represent the same letter/subword, while the DAWG has exactly one node for each letter/subword.

It's not about cheap space in memory, it's about precious space in a file or on a communications link. With an algorithm that builds that trie, we can send 'ten' in three bits, left-right-right. Compared to the 24 bits 'ten' would take up uncompressed, that's a huge savings of valuable disk space or transfer bandwidth.

You might deduce that it saves space on an ideal machine where every byte is allocated efficiently. However, real machines allocate aligned blocks of memory (8 bytes in Java and 16 bytes in some C++ implementations), so it may not save any space.
Java Strings and collections add a relatively high amount of overhead, so the percentage difference can be very small.
Unless your structure is very large, the value of your time outweighs the memory cost, and using the simplest, most standard and easiest-to-maintain collection is far more important; e.g. your time can very easily be worth 1000x or more the value of the memory you are trying to save.
E.g. say you have 10,000 names and you can save 16 bytes each by using a trie (assuming this can be proven without taking more time). That equates to about 160 KB, which at today's prices is worth a fraction of a cent. If your time costs your company $30 per hour, the cost of writing one line of tested code might be $1.
If you have to think about it even a blink of an eye longer to save 160 KB, it's unlikely to be worth it on a PC. (Mobile devices are a different story, but the same argument applies IMHO.)
EDIT: You have inspired me to add an update http://vanillajava.blogspot.com/2011/11/ever-decreasing-cost-of-main-memory.html

Guava may indeed store the key at each level but the point to realize is that the key does not really need to be stored because the path to the node completely defines the key for that node. All that actually needs to be stored at each node is a single boolean indicating whether this is a leaf node or not.
Tries, like any other structure, excel at storing certain types of data. Specifically, tries are best at storing strings that share a common root. Think of storing full-path directory listings for example.
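To make that concrete, here is a minimal sketch of my own (illustrative only, not Guava's implementation) where each node stores just a child map and an end-of-word flag; the key itself is never stored, because the path from the root spells it out:

import java.util.HashMap;
import java.util.Map;

// Minimal trie sketch: nodes store no characters or keys at all --
// the path from the root implicitly spells out the key.
class Trie {
    private static class Node {
        Map<Character, Node> children = new HashMap<>();
        boolean isWord; // true if the path from the root to this node forms a stored word
    }

    private final Node root = new Node();

    void insert(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new Node());
        }
        node.isWord = true;
    }

    boolean contains(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return false;
        }
        return node.isWord;
    }
}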

Related

Most efficient way to concatenate Strings

I was under the impression that StringBuffer is the fastest way to concatenate strings, but I saw this Stack Overflow post saying that concat() is the fastest method. I tried the 2 given examples in Java 1.5, 1.6 and 1.7 but I never got the results they did. My results are almost identical to this
Can somebody explain what I don't understand here? What is truly the fastest way to concatenate strings in Java?
Is there a different answer when one seeks the fastest way to concatenate two strings and when concatenating multiple strings?
String.concat is faster than the + operator if you are concatenating two strings... although this could be fixed at any time and may even have been fixed in Java 8, as far as I know.
The thing you missed in the first post you referenced is that the author is concatenating exactly two strings, and the fast methods are the ones where the size of the new character array is calculated in advance as str1.length() + str2.length(), so the underlying character array only needs to be allocated once.
Using StringBuilder() without specifying the final size, which is also how + works internally, will often need to do more allocations and copying of the underlying array.
If you need to concatenate a bunch of strings together, then you should use a StringBuilder. If it's practical, then precompute the final size so that the underlying array only needs to be allocated once.
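A small sketch of that advice (the method name is mine): compute the total length first, then presize the StringBuilder so the backing array is allocated only once.

// Sketch: presize the StringBuilder so the backing char array is allocated exactly once.
static String joinPresized(String[] parts) {
    int totalLength = 0;
    for (String s : parts) {
        totalLength += s.length();      // first pass: compute the final size
    }
    StringBuilder sb = new StringBuilder(totalLength);
    for (String s : parts) {
        sb.append(s);                   // second pass: append without reallocation
    }
    return sb.toString();
}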
What I understood from the other answers is the following:
If you need thread safety, use StringBuffer
If you do not need thread safety:
If the strings are known beforehand and the same code needs to run multiple times, use '+', as the compiler will optimize it and handle the concatenation at compile time.
If only two strings need to be concatenated, use concat(), as it will not require a StringBuilder/StringBuffer object to be created. Credits to #nickb.
If multiple strings need to be concatenated, use StringBuilder.
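A quick, illustrative sketch of the non-thread-safe cases side by side (the method and variable names are mine):

import java.util.List;

static String demo(String a, String b, List<String> parts) {
    // Compile-time constants: '+' is folded into a single literal by the compiler.
    String constant = "Hello, " + "world";

    // Exactly two strings: concat() avoids creating an intermediate StringBuilder.
    String two = a.concat(b);

    // Many strings: StringBuilder (or StringBuffer if thread safety is needed).
    StringBuilder sb = new StringBuilder();
    for (String s : parts) {
        sb.append(s);
    }
    return constant + two + sb;         // combine the results just so each case is used
}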
Joining very long lists of strings by naively appending them from start to end is very slow: the backing buffer grows incrementally and is reallocated again and again, making additional copies (and putting a lot of pressure on the garbage collector).
The most efficient way to join long lists is to always start by joining the pair of adjacent strings whose total length is the smallest among all candidate pairs; however, this would require a complex lookup to find the optimal pair (similar to the classic optimal merge problem), and finding it only to reduce the number of copies to the strict minimum would slow things down.
What you need is a smart "divide and conquer" recursive algorithm with a good heuristic that comes very close to this optimum:
If you have no string to join, return the empty string.
If you have only 1 string to join, just return it.
Otherwise if you have only 2 strings to join, join them and return the result.
Compute the total length of the final result.
Then determine the number of strings to join from the left until their combined length reaches half of this total, to determine the "divide" point splitting the set of strings into two non-empty parts (each part must contain at least 1 string; the division point cannot be the 1st or last string of the set to join).
Join the smallest part if it has at least 2 strings to join, otherwise join the other part (using this algorithm recursively).
Loop back to the beginning (1.) to complete the other joins.
Note that empty strings in the collection have to be ignored as if they were not part of the set.
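Here is a simplified recursive sketch of that heuristic (my own code; it handles no separators and does not filter out empty strings): it splits the list near half of the total character length and joins each half recursively, so intermediate copies stay as small as possible.

import java.util.List;

// Sketch of the divide-and-conquer join described above.
static String join(List<String> parts) {
    if (parts.isEmpty()) return "";
    if (parts.size() == 1) return parts.get(0);
    if (parts.size() == 2) return parts.get(0).concat(parts.get(1));

    long total = 0;
    for (String s : parts) total += s.length();

    // Walk from the left until the accumulated length reaches half of the total,
    // keeping at least one string on each side of the divide point.
    long prefix = 0;
    int split = 0;
    do {
        prefix += parts.get(split).length();
        split++;
    } while (split < parts.size() - 1 && prefix < total / 2);

    String left = join(parts.subList(0, split));
    String right = join(parts.subList(split, parts.size()));
    return left.concat(right);
}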
Many default implementations of String.join(table of strings, optional separator) found in various libraries are slow because they use naive incremental joining from left to right; the divide-and-conquer algorithm above will outperform them when you need to join MANY small strings to generate a very large string.
Such situation is not exceptional, it occurs in text preprocessors and generators, or in HTML processing (e.g. in "Element.getInnerText()" when the element is a large document containing many text elements separated or contained by many named elements).
The strategy above works when the source strings are all (or almost all) to be garbage collected, keeping only the final result. If the result is kept alongside the list of source strings, the best alternative is to allocate the final large buffer only once, for its total length, and then copy the source strings into it from left to right.
In both cases, this requires a first pass on all strings to compute their total length.
If you use a reallocatable "string buffer", this does not work well when the buffer reallocates constantly. However, the string buffer may be useful during the first pass to prejoin short strings that fit into it, with a reasonable (medium) size (e.g. 4 KB for one page of memory): once it is full, replace that subset of strings with the content of the string buffer and allocate a new one.
This can considerably reduce the number of small strings in the source set, and after the first pass you have the total length for the final buffer to allocate for the result, into which you'll copy incrementally all the remaining medium-size strings collected in the first pass. This works very well when the list of source strings comes from a parser or generator, where the total length is not fully known before the end of parsing/generation: you'll use only intermediate string buffers of medium size, and finally you'll generate the final buffer without reparsing the input (to get many incremental fragments) or calling the generator repeatedly (which would be slow, or would not work at all for some generators, or if the parser's input is consumed and not recoverable from the start).
Note that this remark applies not just to joining strings, but also to file I/O: writing a file incrementally also suffers from reallocation and fragmentation, so you should try to precompute the total final length of the generated file. Otherwise you need a classic buffer (implemented in most file I/O libraries, and usually sized at about one memory page of 4 KB), but you should allocate more, because file I/O is considerably slower and fragmentation becomes a performance problem for later file accesses when file fragments are allocated incrementally in units of just one cluster. Using a buffer of about 1 MB will avoid most performance problems caused by fragmented allocation on the file system, as fragments will be considerably larger. A filesystem like NTFS is optimized to support fragments up to 64 MB, above which fragmentation is no longer a noticeable problem; the same is true for Unix/Linux filesystems, which tend to defragment only up to a maximum fragment size and can efficiently handle allocation of small fragments using "pools" of free clusters organized by minimum size of 1 cluster, 2 clusters, 4 clusters, 8 clusters... in powers of two, so that defragmenting these pools is straightforward and not very costly, and can be done asynchronously in the background when there's a low level of I/O activity.
And in all modern OSes, memory management is correlated with disk storage management, using memory-mapped files for handling caches: memory is backed by storage managed by the virtual memory manager (which means you can allocate more dynamic memory than you have physical RAM; the rest is paged out to disk when needed), so the strategy you use for managing RAM for very large buffers tends to be correlated with the performance of I/O paging. Using a memory-mapped file is a good solution, and everything that worked with file I/O can then be done in a very large (virtual) memory space.

Space differences in implementations of a trie

The more I read about tries the more confused I get for some reason.
What confuses me now is the following:
I have read about 2 types of implementation.
Using arrays to represent the characters (not storing the characters themselves), where each node also stores the index of the actual word (if we have reached a word).
Using a collection of nodes that store characters, with a boolean at the end of each node to determine whether we have reached a word going down this path.
In the first case it is not mentioned, but it seems that we must actually keep all the dictionary words (since we indirectly reference them). So we have array_size * numberOfNodes * lengthOfWord + the size of the dictionary being processed.
In the latter case we don't need the dictionary, since the chars are stored directly in the tree. So it seems to me that the second implementation is more space efficient. But I am not sure by how much.
Is my understanding correct on the implementations and is there specific reasons to choose one over the other? Also how could we calculate the space requirements for the second case?
Tries do not store the original words anywhere; instead they store them implicitly. The basic structure of a trie is the following: each node in the trie stores
A single bit determining whether or not the path that arrives at the node forms a word, and
A collection of pointers to child nodes labeled by characters.
To determine whether a word is in the trie, you start at the root, then follow the appropriately-labeled pointers one at a time. If you arrive at a node marked as a word, then the word exists in the trie. If you arrive at a node that isn't marked, or you fall off the trie, the word is not present.
The difference between the two structures you have listed above is how the child pointers are stored. In the first version, the child pointers are stored as an array of one pointer per symbol in the alphabet, which makes following child pointers extremely fast but can be extremely space-inefficient. In the second version, you explicitly store some type of collection holding just the labeled pointers you need. This is slower, but is more space efficient for sparse tries.
The space usage of a trie depends on the number of nodes (call it n), size of the alphabet (call it k), and the way in which child pointers are represented. If you store a fixed-sized array of pointers, then the space usage is about kn pointers (n nodes with k pointers each), plus n bits for the markers at each node. If you have, say, a dynamic array of pointers stored in sorted order, the overhead will be n total child pointers, plus n bits, plus n times the amount of space necessary to store a single collection.
The advantage of the first approach is speed and simplicity, with very good performance on dense tries. The second is slower but more memory efficient for sparse tries.
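For concreteness, here is a sketch of the two node layouts (the class names are mine):

import java.util.HashMap;
import java.util.Map;

// Version 1: a fixed array of children, one slot per letter of the alphabet.
// Very fast child lookup, but every node pays for k pointers even if most are null.
class ArrayTrieNode {
    static final int ALPHABET_SIZE = 26;                    // assuming lowercase a-z
    ArrayTrieNode[] children = new ArrayTrieNode[ALPHABET_SIZE];
    boolean isWord;                                         // marks the end of a word
}

// Version 2: a map holding only the children that actually exist.
// Slower lookups, but far less wasted space in sparse tries.
class MapTrieNode {
    Map<Character, MapTrieNode> children = new HashMap<>();
    boolean isWord;
}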
These are not the only space optimizations possible. Patricia tries compress nodes with just one child together and are very space-efficient. DAWGs try to merge as many nodes as possible together, but do not support efficient insertions.
Hope this helps!

How can we refer to a specific line of a file?

I was reading about algorithmic problem and one was the following:
Having a file with millions of lines of data, there are 2 lines which are identical. The lines are so long that they may not fit in memory. Find the 2 identical lines.
The solution suggested was to read lines in parts and create hashes for each line.
E.g. you build the hash for line 1 by building the hash of part-1 of line 1 (which can be read in memory) and then hash of part-2 of line 1 up to part-N of line 1.
Store the hashes in file or hashtable. For any same hash values compare the lines. If the lines are the same we solved it.
Although I understand this solution in high level, I have no idea how this could be implemented. How can we associate a hash with a specific line in file? Is this language implementation detail?
E.g. in Java how would we address this?
The real answer is: buy more memory. The longest string you can have in Java is 2 GB, and that will fit in machines these days. You can buy 32 GB for less than $200.
But to solve the problem, I suggest you
find the offset of each line.
find the lines which are the same length (using the difference of offset)
calculate 64-bit or longer hashes of the lines with the same length.
for the lines with the same hash, do a byte-by-byte comparison.
Note: if you don't have enough memory to cache the entire file this will take a very long time. If you have a 32 GB machine and it has a 64 GB file, each pass will take about 20 minutes, and this has multiple passes.
1) Which API do I use to find the offset?
You count the number of bytes you have read, and that is the offset.
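For example (a rough sketch of my own, assuming a single-byte encoding and '\n' line terminators), you can record each line's starting offset while counting the bytes you read:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch: record the byte offset at which each line starts by counting bytes read.
// A trailing newline at the end of the file yields one extra (empty-line) entry.
static List<Long> lineOffsets(String path) throws IOException {
    List<Long> offsets = new ArrayList<>();
    try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(path))) {
        offsets.add(0L);                 // the first line starts at offset 0
        long pos = 0;
        int b;
        while ((b = in.read()) != -1) {
            pos++;
            if (b == '\n') {
                offsets.add(pos);        // the next line starts right after the newline
            }
        }
    }
    return offsets;
}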
2) "The real answer is buy more memory" - project managers don't agree with this for real products. Do you have different experience?
I point out to them that I could spend a day, which could cost them over $1000 (even if that is not what I get paid), saving $100 worth of reusable memory, if they think that is a good use of resources. I let them decide ;)
My 8-year-old son has 8 GB in a PC he built, as the memory cost me £24. Yet you are right that there are project managers who think 8 GB is too much for a professional who is costing them that much per hour!? I have 16 GB in a PC which I don't use to run anything serious, because I do my work on a machine with 256 GB. You can buy machines with 2 TB these days, which is overkill for most applications. ;)
While I agree the solution is to utilize modern techniques and leverage how cheap memory is these days, the problem is one meant to exercise the mind and understand how to solve the problem under the given constraints.
The hashing you talked about is rather simple.
The Java solution can leverage a few things under the hood which may obscure what's actually going on, so I will explain the solution first and the Java implementation second.
Generic Solution:
Hash functions such as SHA-1, MD5, etc. generate a fixed-size value by compressing the input. Let's say you can only store the first MB of characters of each line.
You would iterate over each line, get the first MB of characters, and pass that into the Hashing algorithm(MD5 for example).
You then map the hash as the key, and a list/array of line numbers as the value.
After the first pass, any lines with a matching first MB of characters will end up with the same hash, and thus in the same list in the map.
To prepare for the second pass, you search the map and cull any lists that contain only one line number.
Then you create a list of line numbers by compiling the line numbers from the remaining entries in the map, these lines will be the only ones checked in the second pass.
Second pass, you pull the Second MB of characters from each line in your line list, hash them and put them in the map in the same fashion as pass one.
Iterate over the entries in the map, culling hash entries that only have one line number.
Repeat step two, incrementing the character block (MB) to coincide with the pass number.
When you reach a pass where you have only one hash with multiple line numbers, and that hash has only two line numbers, those lines are the two that are the same.
This is essentially a tree search.
Java Method:
Java has a class called HashMap, which automatically hashes the key. By using a
HashMap<String, ArrayList<Integer>>
for your master map, all you have to do is call
map.get(mbBlock).add(lineNumber); - of course, you should check whether this is the first time the key has been used, so you don't get a NullPointerException.
After each pass, cull the entries containing only one line number.
Reiterate over the remaining lines until you only have two line numbers left.
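A small sketch of that bookkeeping (the method and parameter names are mine); computeIfAbsent removes the need for the explicit null check:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of one pass: group line numbers by the block (or its hash) computed for
// each line, then cull groups that contain only a single line.
static Map<String, List<Integer>> groupByBlock(Map<Integer, String> blockPerLine) {
    Map<String, List<Integer>> groups = new HashMap<>();
    for (Map.Entry<Integer, String> e : blockPerLine.entrySet()) {
        // computeIfAbsent creates the list on first use, so no null check is needed
        groups.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
    }
    groups.values().removeIf(lines -> lines.size() < 2);    // cull unique lines
    return groups;
}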
Get the first k characters of each line, where k is configurable. Hash them to find the groups of lines that could contain the identical lines.
Based on the result of the first step, which greatly narrows down the search range, run your algorithm on each smaller group for the next k characters.
The search range is narrowed down dramatically after each round, if not in the worst case.
The trick of the algorithm is breaking big problems into small ones and making full use of the results of previous steps.

What is the general purpose of using hashtables as a collection? [duplicate]

Possible Duplicate:
What exactly are hashtables?
I understand the purpose of using hash functions to securely store passwords. I have used arrays and arraylists for class projects for sorting and searching data. What I am having trouble understanding is the practical value of hashtables for something like sorting and searching.
I got a lecture on hashtables but we never had to use them in school, so it hasn't clicked. Can someone give me a practical example of a task a hashtable is useful for that couldn't be done with a numerical array or arraylist? Also, a very simple low level example of a hash function would be helpful.
There are all sorts of collections out there. Collections are used for storing and retrieving things, so one of the most important properties of a collection is how fast these operations are. To estimate "fastness", people in computer science use big-O notation, which roughly means how many individual operations you have to perform to invoke a certain method (be it get or set, for example). For example, to get an element of an ArrayList by index you need exactly 1 operation, which is O(1); if you have a LinkedList of length n and you need to get something from the middle, you'll have to traverse from the start of the list to the middle, taking n/2 operations, so get has a complexity of O(n). The same applies to key-value stores like hashtables. There are implementations that give you O(log n) complexity to get a value by its key, whereas a hashtable copes in O(1). Basically it means that getting a value from a hashtable by its key is really cheap.
Basically, hashtables have similar performance characteristics to arrays with numerical indices (cheap lookup, cheap appending; hashtables are unordered, and adding to them is cheap partly because of this), but are much more flexible in terms of what the key may be. Given a contiguous chunk of memory and a fixed size per item, you can get the address of the nth item very easily and cheaply. That's thanks to the indices being integers - you can't do that with, say, strings. At least not directly. Hashing allows reducing any object (that implements it) to a number, and then you're back to arrays. You still need to check for hash collisions and resolve them (which incurs mostly a memory overhead, since you need to store the original key), but with a halfway decent implementation this is not much of an issue.
So you can now associate any (hashable) object with any (really any) value. This has countless uses (although I have to admit, I can't think of one that's applicable to sorting or searching). You can build caches with small overhead (because checking whether the cache can help in a given case is O(1)), implement a relatively performant object system (several dynamic languages do this), go through a list of (id, value) pairs and accumulate the values for identical ids in any way you like, and many other things.
Very simple. Hashtables are often called "associative arrays". Arrays let you access your data by index. Hash tables let you access your data by any other identifier, e.g. a name. For example:
one is associated with 1
two is associated with 2
So, when you get the word "one" you can find its value 1 using the hashtable, where the key is "one" and the value is 1. An array only allows the opposite mapping.
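In Java terms, a trivial example:

import java.util.HashMap;
import java.util.Map;

public class HashExample {
    public static void main(String[] args) {
        // The key is the word itself, not a numeric index.
        Map<String, Integer> numbers = new HashMap<>();
        numbers.put("one", 1);
        numbers.put("two", 2);
        System.out.println(numbers.get("one"));   // prints 1, looked up in expected O(1)
    }
}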
For n data elements:
Hashtables allow O(k) searches (k usually depends only on the hashing function). This is better than the O(log n) of a binary search (which requires an O(n log n) sort first; if the data is not sorted you are worse off).
However, on the flip side, hashtables tend to take roughly 3n space.

Memory conscious string filtering

Let's say I have 500 words:
Martin
Hopa
Dunam
Golap
Hugnog
Foo
... + 494 more words
I have following text that is about 85KB in total:
Marting went and got him self stuff
from Hopa store and now he is looking
to put it into storage with his best
friend Dunam. They are planing on
using Golap lock that they found in
Hugnog shop in Foo town. >... text continues into several pages
I would like to produce following text:
------- went and got him self stuff
from ---- store and now he is looking
to put it into storage with his best
friend ----. They are planing on
using ---- lock that they found in
------ shop in --- town. >... text continues into several pages
Currently I'm using commons method:
String[] words = // all 500 words
String[] masks = // the generated mask for each word
String filteredText = StringUtils.replaceEach(textToBeFiltered, words, masks);
Is there a another way to do this that could be more efficient when it comes to memory and CPU usage?
What is the best storage for the 500 words? File, List, enum, array ...?
How would you get statistics, such as how many and what words were replaced; and for each word how many times it was replaced.
I wouldn't care much about CPU and memory usage. It should be relatively small for such a problem and such a volume of text.
What I would do is
have a Map containing all the words as keys, with the number of times they have been found in the text (initially 0)
read the text word by word, using a StringTokenizer or the String.split() method
for each word, check if the map contains it (an O(1) operation, very quick)
if it contains it, append "----" to a StringBuilder and increment the value stored for the word in the map
else append the word itself (with a space before it, unless it's the first word of the text)
At the end of the process, the StringBuilder contains the result, and the map contains the number of times each word was used as a replacement.
Make sure to initialize the StringBuilder with the length of the original text, in order to avoid too many reallocations.
Should be simple and efficient.
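A minimal sketch of that approach (the identifier names are mine; it tokenizes on whitespace, so punctuation attached to a word would defeat the match in this simple form):

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch: replace every known word with "----" and count how often each was replaced.
static String maskWords(String text, Set<String> words) {
    Map<String, Integer> counts = new HashMap<>();
    for (String w : words) counts.put(w, 0);

    StringBuilder out = new StringBuilder(text.length());   // presized, as advised above
    boolean first = true;
    for (String token : text.split("\\s+")) {
        if (!first) out.append(' ');
        if (counts.containsKey(token)) {
            out.append("----");
            counts.merge(token, 1, Integer::sum);           // per-word replacement count
        } else {
            out.append(token);
        }
        first = false;
    }
    // 'counts' now holds the statistics asked for in the question.
    return out.toString();
}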
I wouldn't care about memory much, but in case you do: trie is your friend. It's memory efficient for large sets and it allows very efficient matching. You may want to implement it in a compressed fashion.
If I understand the problem correctly, you need to read the 85KB of text and parse out every word (use split or StringTokenizer). For every word, you need to know whether it is in the set of 500 words, and if so, switch it with the corresponding mask.
If you know you have about 500 words, I'd suggest storing the 500 words and their masks in a HashMap with an initial capacity of about 650 (the JDK docs say hashing is most efficient with a load factor of 0.75). Put the word-mask pairs into the HashMap with a for loop.
The biggest bang for the buck you get from the HashMap is that the get/put operations (searching for the key) are done in constant time, which is better than O(n) in an array and even O(log(n)) with binary search on a sorted array.
Armed with the HashMap, you can build up a StringBuffer while filtering those 85KB of text.
Return the StringBuffer's toString() from your method and you are done! Regards, - M.S.
PS If you are building the map at a server and doing the filtering somewhere else (at a client) and need to transport the Dictionary, HashMap won't do - it cannot be serialized. Use a Hashtable in that case. If on the same machine, HashMap is more memory efficient. Later, - M.S.
