tl;dr: I need to keep some values in my app up to date with the values in ~10 small files, but I'm worried that reading them over and over will have a lot of GC overhead. Do I create a bunch of unbuffered file readers and poll them, or is there any way to "map" the value in a file into a Java Double that I can re-read a moment later when the value has (maybe) changed?
Long version: I've got some physical sensors (gyroscope, tachometer) whose current values ev3dev helpfully exposes as small files in a virtual filesystem. For example, one file called "/sys/bus/lego/drivers/ev3-analog-sensor/angle" contains 56.26712
Or the next moment it contains 58.9834
And I'd like a value in my app to stay as close in sync with that file as possible. I could have your standard loop containing MappedByteBuffer buffer = inChannel.map(FileChannel.MapMode.READ_ONLY, 0, inChannel.size()); (from here) but that seems like a lot of allocation overhead if I put it in a fast loop.
Maybe something with a Scanner, or
FileChannel inChannel = aFile.getChannel();
ByteBuffer buffer = ByteBuffer.allocate(1024);
while(inChannel.read(buffer) > 0)...
I haven't found a magic function like KeepInSyncWithFile(myFloatArray, File("./angle", MODE.FILE_TO_VALUE, 10, TimeUnits.MS))
Java 8+
Since you are talking about pseudofiles on the /sys virtual filesystem, it's unlikely that the standard WatchService will work for them. In order to get updated values, you need to read these files.
The good news is that you can keep reading in a garbage-free manner, i.e. with no allocation at all. Open the file and allocate the buffer just once, and every time you want to read a value, seek to the beginning of the file and read to an existing preallocated buffer.
Here is the code:
import java.io.Closeable;
import java.io.EOFException;
import java.io.IOException;
import java.io.RandomAccessFile;

public class DeviceReader implements Closeable {
private final RandomAccessFile file;
private final byte[] buf = new byte[512];
public DeviceReader(String fileName) throws IOException {
this.file = new RandomAccessFile(fileName, "r");
}
    @Override
public void close() throws IOException {
file.close();
}
public synchronized double readDouble() throws IOException {
file.seek(0);
int length = file.read(buf);
if (length <= 0) {
throw new EOFException();
}
int sign = 1;
long exp = 0;
long value = 0;
for (int i = 0; i < length; i++) {
byte ch = buf[i];
if (ch == '-') {
sign = -1;
} else if (ch == '.') {
exp = 1;
} else if (ch >= '0' && ch <= '9') {
value = (value * 10) + (ch - '0');
exp *= 10;
} else if (ch < ' ') {
break;
}
}
return (double) (sign * value) / Math.max(1, exp);
}
}
Note that I manually parse a floating point number from the byte[] buffer. It would be much easier to call Double.parseDouble, but in that case you'd have to convert the byte[] to a String, and the algorithm would no longer be allocation-free.
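Usage is then a plain polling loop. A minimal sketch, using the sensor path from the question and a 10 ms poll interval; the loop condition is a placeholder, and the enclosing method would declare throws IOException, InterruptedException:

try (DeviceReader reader = new DeviceReader("/sys/bus/lego/drivers/ev3-analog-sensor/angle")) {
    while (!Thread.currentThread().isInterrupted()) {
        double angle = reader.readDouble(); // no allocation per call
        // ... feed the value to the rest of the app ...
        Thread.sleep(10);
    }
}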
I can't vouch for this, but FileObserver might be worth looking into. You can cache the latest values in your app and watch the file via FileObserver for modify events. I personally don't have any experience working with it, so I can't say for sure whether it works with system files. But if it does, it's a better solution than repeatedly polling the file in a loop.
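If you want to experiment, a minimal sketch (assuming an Android environment, since android.os.FileObserver is an Android class; whether MODIFY events actually fire for sysfs pseudofiles is exactly the untested part):

FileObserver observer = new FileObserver("/sys/bus/lego/drivers/ev3-analog-sensor/angle", FileObserver.MODIFY) {
    @Override
    public void onEvent(int event, String path) {
        // re-read the file here and update the cached value
    }
};
observer.startWatching();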
I'd like some feedback on a method I tried to implement that isn't working 100%. I'm making an Android app for practice where the user is given 20 random letters. The user then uses these letters to make a word of whatever size. It then checks a dictionary to see if it is a valid English word.
The part that's giving me trouble is with showing a "hint". If the user is stuck, I want to display the possible words that can be made. I initially thought recursion. However, with 20 letters this can take quite a long time to execute. So I also implemented a binary search to check if the current recursion path is a prefix to anything in the dictionary. I do get valid hints output, however it's not returning all possible words. Do I have a mistake here in my recursion thinking? Also, is there a recommended, faster algorithm? I've seen a method in which you check each word in a dictionary and see if the characters can make each word. However, I'd like to know how effective my method is vs. that one.
private static void getAllWords(String letterPool, String currWord) {
//Add to possibleWords when valid word
if (letterPool.equals("")) {
//System.out.println("");
} else if(currWord.equals("")){
for (int i = 0; i < letterPool.length(); i++) {
String curr = letterPool.substring(i, i+1);
String newLetterPool = (letterPool.substring(0, i) + letterPool.substring(i+1));
if(dict.contains(curr)){
possibleWords.add(curr);
}
boolean prefixInDic = binarySearch(curr);
if( !prefixInDic ){
break;
} else {
getAllWords(newLetterPool, curr);
}
}
} else {
//Every time we add a letter to currWord, delete from letterPool
//Attach new letter to curr and then check if in dict
for(int i=0; i<letterPool.length(); i++){
String curr = currWord + letterPool.substring(i, i+1);
String newLetterPool = (letterPool.substring(0, i) + letterPool.substring(i+1));
if(dict.contains(curr)) {
possibleWords.add(curr);
}
boolean prefixInDic = binarySearch(curr);
if( !prefixInDic ){
break;
} else {
getAllWords(newLetterPool, curr);
}
}
    }
}
private static boolean binarySearch(String word){
int max = dict.size() - 1;
int min = 0;
int currIndex = 0;
boolean result = false;
while(min <= max) {
currIndex = (min + max) / 2;
if (dict.get(currIndex).startsWith(word)) {
result = true;
break;
} else if (dict.get(currIndex).compareTo(word) < 0) {
min = currIndex + 1;
} else if(dict.get(currIndex).compareTo(word) > 0){
max = currIndex - 1;
} else {
result = true;
break;
}
}
return result;
}
The simplest way to speed up your algorithm is probably to use a Trie (a prefix tree).
Trie data structures offer two relevant methods: isWord(String) and isPrefix(String), both of which take O(n) comparisons to determine whether a word or prefix exists in a dictionary (where n is the number of letters in the argument). This is really fast because it doesn't matter how large your dictionary is.
For comparison, your method for checking if a prefix exists in your dictionary using binary search is O(n*log(m)) where n is the number of letters in the string and m is the number of words in the dictionary.
I coded up a similar algorithm to yours using a Trie and compared it to the code you posted (with minor modifications) in a very informal benchmark.
With a 20-character input, the Trie took 9 ms. The original code didn't complete in a reasonable time, so I had to kill it.
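For reference, a bare-bones Trie with the two relevant methods might look like this (a sketch, assuming lowercase a-z input only):

class Trie {
    private final Trie[] children = new Trie[26]; // one slot per letter a-z
    private boolean isWord;

    void insert(String word) {
        Trie node = this;
        for (int i = 0; i < word.length(); i++) {
            int c = word.charAt(i) - 'a';
            if (node.children[c] == null) node.children[c] = new Trie();
            node = node.children[c];
        }
        node.isWord = true;
    }

    boolean isWord(String s)   { Trie n = find(s); return n != null && n.isWord; }
    boolean isPrefix(String s) { return find(s) != null; }

    // walk the tree one character at a time; null means the path doesn't exist
    private Trie find(String s) {
        Trie node = this;
        for (int i = 0; i < s.length() && node != null; i++) {
            node = node.children[s.charAt(i) - 'a'];
        }
        return node;
    }
}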
Edit:
As to why your code doesn't return all hints, you don't want to break if the prefix is not in your dict. You should continue to check the next prefix instead.
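That is, the tail of each loop iteration becomes:

boolean prefixInDic = binarySearch(curr);
if (prefixInDic) {
    getAllWords(newLetterPool, curr);
}
// no break here: fall through and try the next letter in letterPool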
Is there a recommended, faster algorithm?
See Wikipedia article on "String searching algorithm", in particular the section named "Algorithms using a finite set of patterns", where "finite set of patterns" is your dictionary.
The Aho–Corasick algorithm listed first might be a good choice.
I implemented a wordcount program with Java. Basically, the program takes a large file (in my tests, I used a 10 GB data file that contained numbers only) and counts the number of times each 'word' appears - in this case, a number (23723, for example, might appear 243 times in the file).
Below is my implementation. I seek to improve it, with mainly performance in mind, but a few other things as well, and I am looking for some guidance. Here are a few of the issues I wish to correct:
Currently, the program is threaded and works properly. However, what I do is pass a chunk of memory (500MB/NUM_THREADS) to each thread, and each thread proceeds to wordcount. The problem here is that I have the main thread wait for ALL the threads to complete before passing more data to each thread. It isn't too much of a problem, but there is a period of time where a few threads will wait and do nothing for a while. I believe some sort of worker pool or executor service could solve this problem (I have not learned the syntax for this yet).
The program will only work for a file that contains integers. That's a problem. I struggled with this a lot, as I didn't know how to iterate through the data without creating loads of unused variables (using a String or even StringBuilder had awful performance). Currently, I use the fact that I know the input is an integer, and just store the temporary variables as an int, so no memory problems there. I want to be able to use some sort of delimiter, whether that delimiter be a space, or several characters.
I am using a global ConcurrentHashMap to store key-value pairs. For example, if a thread finds a number "24624", it searches for that number in the map. If it exists, it will increase the value of that key by one. The value of a key at the end represents the number of occurrences of that key. So is this the proper design? Would I gain in performance by giving each thread its own hashmap, and then merging them all at the end?
Is there any other way of seeking through a file with an offset without using the class RandomAccessFile? This class will only read into a byte array, which I then have to convert. I haven't timed this conversion, but maybe it could be faster to use something else.
I am open to other possibilities as well, this is just what comes to mind.
Note: Splitting the file is not an option I want to explore, as I might be deploying this on a server where I shouldn't be creating my own files, but if it would really be a performance boost, I might listen.
Other Note: I am new to java threading, as well as new to StackOverflow. Be gentle.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class BigCount2 {
public static void main(String[] args) throws IOException, InterruptedException {
int num, counter;
long i, j;
String delimiterString = " ";
ArrayList<Character> delim = new ArrayList<Character>();
for (char c : delimiterString.toCharArray()) {
delim.add(c);
}
int counter2 = 0;
num = Integer.parseInt(args[0]);
int bytesToRead = 1024 * 1024 * 1024 / 2; // 512 MB read per outer-loop pass
int remainder = bytesToRead % num;
int k = 0;
bytesToRead = bytesToRead - remainder;
int byr = bytesToRead / num;
String filepath = "C:/Users/Daniel/Desktop/int-dataset-10g.dat";
RandomAccessFile file = new RandomAccessFile(filepath, "r");
Thread[] t = new Thread [num];//array of threads
ConcurrentMap<Integer, Integer> wordCountMap = new ConcurrentHashMap<Integer, Integer>(25000);
byte[] byteArray = new byte[byr]; // one chunk of bytesToRead / num bytes, reused for each slice
char[] newbyte;
for (i = 0; i < file.length(); i += bytesToRead) {
counter = 0;
for (j = 0; j < bytesToRead; j += byr) {
file.seek(i + j);
file.read(byteArray, 0, byr);
newbyte = new String(byteArray).toCharArray();
t[counter] = new Thread(
new BigCountThread2(counter,
newbyte,
delim,
wordCountMap)); // each thread gets its own slice of the buffer
t[counter].start();
counter++;
newbyte = null;
}
for (k = 0; k < num; k++){
t[k].join(); //main thread continues after ALL threads have finished.
}
counter2++;
System.gc();
}
file.close();
System.exit(0);
}
}
class BigCountThread2 implements Runnable {
private final ConcurrentMap<Integer, Integer> wordCountMap;
char [] newbyte;
private ArrayList<Character> delim;
private int threadId; //use for later
BigCountThread2(int tid,
char[] newbyte,
ArrayList<Character> delim,
ConcurrentMap<Integer, Integer> wordCountMap) {
this.delim = delim;
threadId = tid;
this.wordCountMap = wordCountMap;
this.newbyte = newbyte;
}
public void run() {
int intCheck = 0;
int counter = 0; int i = 0; Integer check; int j =0; int temp = 0; int intbuilder = 0;
for (i = 0; i < newbyte.length; i++) {
intCheck = Character.getNumericValue(newbyte[i]);
if (newbyte[i] == ' ' || intCheck == -1) { //once a delimiter is found, the current tempArray needs to be added to the MAP
check = wordCountMap.putIfAbsent(intbuilder, 1);
if (check != null) { // putIfAbsent returns null on first insert; non-null means increment the existing count
wordCountMap.put(intbuilder, wordCountMap.get(intbuilder) + 1);
}
intbuilder = 0;
}
else {
intbuilder = (intbuilder * 10) + intCheck;
counter++;
}
}
}
}
Some thoughts on most of these points ..
.. I believe some sort of worker pool or executor service could solve this problem (I have not learned the syntax for this yet).
If all the threads take about the same time to process the same amount of data, then there really isn't that much of a "problem" here.
However, one nice thing about a Thread Pool is it allows one to rather trivially adjust some basic parameters such as number of concurrent workers. Furthermore, using an executor service and Futures can provide an additional level of abstraction; in this case it could be especially handy if each thread returned a map as the result.
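For instance, a sketch of that idea (numThreads, chunks and countChunk are illustrative names; the imports come from java.util and java.util.concurrent, and the enclosing method would declare throws Exception for the Future.get calls):

ExecutorService pool = Executors.newFixedThreadPool(numThreads);
List<Future<Map<Integer, Integer>>> results = new ArrayList<>();
for (char[] chunk : chunks) {
    results.add(pool.submit(() -> countChunk(chunk))); // each worker returns its own map
}
Map<Integer, Integer> total = new HashMap<>();
for (Future<Map<Integer, Integer>> f : results) {
    for (Map.Entry<Integer, Integer> e : f.get().entrySet()) {
        total.merge(e.getKey(), e.getValue(), Integer::sum); // merge partial counts
    }
}
pool.shutdown();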
The program will only work for a file that contains integers. That's a problem. I struggled with this a lot, as I didn't know how to iterate through the data without creating loads of unused variables (using a String or even StringBuilder had awful performance) ..
This sounds like an implementation issue. While I would first try a StreamTokenizer (because it's already written), if doing it manually, I would check out the source - a good bit of that can be omitted when simplifying the notion of a "token". (It uses a temporary array to build the token.)
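To illustrate the ready-made route (a sketch; filepath as in your code, and StreamTokenizer parses the numbers for you at the cost of some garbage):

StreamTokenizer tok = new StreamTokenizer(new BufferedReader(new FileReader(filepath)));
while (tok.nextToken() != StreamTokenizer.TT_EOF) {
    if (tok.ttype == StreamTokenizer.TT_NUMBER) {
        int value = (int) tok.nval; // nval is a double; the file holds integers
        // count 'value' here
    }
}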
I am using a global ConcurrentHashMap to story key value pairs. .. So is this the proper design? Would I gain in performance by giving each thread it's own hashmap, and then merging them all at the end?
It would reduce locking and may increase performance to use a separate map per thread plus a merge step at the end. Furthermore, the current implementation is broken: wordCountMap.put(intbuilder, wordCountMap.get(intbuilder) + 1) is not atomic, so the operation might undercount. I would use a separate map simply because reducing mutable shared state makes a threaded program much easier to reason about.
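If you do keep the single shared map, the read-modify-write at least needs to be atomic. On Java 8+ that is a one-liner (this fixes only the atomicity, not the shared-state design):

wordCountMap.merge(intbuilder, 1, Integer::sum); // atomic insert-or-increment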
Is there any other way of seeking through a file with an offset without using the class RandomAccessFile? This class will only read into a byte array, which I then have to convert. I haven't timed this conversion, but maybe it could be faster to use something else.
Consider using a FileReader (and BufferedReader) per thread on the same file. This will avoid having to first copy the file into the array and slice it out for individual threads which, while the same amount of total reading, avoids having to soak up so much memory. The reading done is actually not random access, but merely sequential (with a "skip") starting from different offsets - each thread still works on a mutually exclusive range.
Also, the original code with the slicing is broken if an integer value was "cut" in half, as each of the threads would read half the word. One workaround is to have each thread skip the first word if it is a continuation from the previous block (i.e. scan one byte sooner) and then read past the end of its range as required to complete the last word.
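A sketch of that per-thread scheme, assuming space-delimited non-negative integers as in your data file (countRange, start and end are illustrative names; it uses java.io.BufferedReader/FileReader and a java.util.Map, and ignores that skip() may in principle skip fewer bytes):

void countRange(String filepath, long start, long end, Map<Integer, Integer> counts) throws IOException {
    try (BufferedReader in = new BufferedReader(new FileReader(filepath))) {
        long pos = start;
        if (start > 0) {
            in.skip(start - 1); // back up one byte to detect a word cut at the boundary
            pos = start - 1;
            int ch;
            while ((ch = in.read()) >= 0 && ch != ' ') pos++; // discard the continuation word
            pos++;
        }
        int value = 0;
        boolean inNumber = false;
        int c;
        while ((c = in.read()) >= 0) {
            pos++;
            if (c >= '0' && c <= '9') {
                value = value * 10 + (c - '0');
                inNumber = true;
            } else {
                if (inNumber) counts.merge(value, 1, Integer::sum);
                value = 0;
                inNumber = false;
                if (pos > end) break; // past our range and not mid-number: done
            }
        }
        if (inNumber) counts.merge(value, 1, Integer::sum); // last word at end of file
    }
}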
I am reading data in byte-by-byte. When I have determined that I have an entire message, I need to pass it to another function as a string. Some messages can be quite large, and the sizes vary frequently. Which implementation do you all feel is most efficient:
public class Test
{
char[] buffer = new char[MAX_SIZE_7200];
int bufferIndex = 0;
void parseData(ArrayList<Byte> msg, int length)
{
while (!msg.isEmpty())
{
buffer[bufferIndex++] = (char) msg.remove(0);
if (isfullmessage)
{
parseData(new String(buffer, 0, bufferIndex));
bufferIndex = 0; //restart and continue parsing data
}
}
}
}
OR:
public class Test
{
List<Character> buffer = new ArrayList<Character>();
int bufferIndex = 0;
void parseData(ArrayList<Byte> msg, int length)
{
while (!msg.isEmpty())
{
buffer.add((char) msg.remove(0));
if (isfullmessage)
{
StringBuilder builder = new StringBuilder(buffer.size());
for (Character ch: buffer)
{
builder.append(ch);
}
parseData(builder.toString());
buffer.clear();
}
}
}
}
OR:
public class Test
{
StringBuilder buffer = new StringBuilder();
int bufferIndex = 0;
void parseData(ArrayList<Byte> msg, int length)
{
while (!msg.isEmpty())
{
buffer.append((char) msg.remove(0));
if (isfullmessage)
{
parseData(buffer.toString());
buffer.setLength(0); // StringBuilder's "clear": resets length, keeps capacity
}
}
}
}
or is there a more efficient way. Please note that i have the variables holding my finished message outside the scope of the function since i might process data that does not contain a full message and may take multiple executions of the function to get a full message and process it.
Use StringBuilder. It supports appending characters one at a time, expands capacity as needed, and can be reset for reuse purposes.
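Resetting for reuse is just setLength(0), which keeps the builder's internal array around (a small sketch):

StringBuilder buffer = new StringBuilder();
// ... append characters until a full message is assembled ...
String message = buffer.toString();
buffer.setLength(0); // ready for the next message, no reallocation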
I think this depends on the average length of your messages. For mostly 'full' messages, the char array will be the better choice, since its element type is a primitive char.
For the sparse case, the List will be better, as the memory consumption will be lower despite the overhead of the object representation of each element.
You are iterating over the elements of the ArrayList msg by removing the first element of this ArrayList in a loop. An ArrayList stores all its elements in an array. Removing the first element is slow, because all elements in the array (except the first one) need to be copied.
So maybe the biggest overhead is not in setting a character in an array or appending a character to a StringBuilder, but maybe the biggest overhead is in the repeated calls to msg.remove(0).
You can fix this by using this:
int index = 0;
while (index < msg.size()) {
buffer[bufferIndex++] = (char) msg.get(index);
index++;
// etc.
}
A few days ago I had an interview at some big company (the name is not required :)), and the interviewer asked me to find a solution to the following task:
Predefined:
There is a dictionary of words of unspecified size; we just know that all words in the dictionary are sorted (for example, alphabetically). Also we have just one method
String getWord(int index) throws IndexOutOfBoundsException
Needs:
We need to develop an algorithm to find some input word in the dictionary using Java. For this we should implement the method
public boolean isWordInTheDictionary(String word)
Limitations:
We cannot change the internal structure of the dictionary, we have no access to the internal structure, and we do not know the count of elements in the dictionary.
Issues:
I have developed a modified binary search, and will post my (working) variant of the algorithm below, but are there other variants with logarithmic complexity? My variant has complexity O(log N).
My variant of implementation:
public class Dictionary {
private static final int BIGGEST_TOP_MASK = 0xF00000;
private static final int LESS_TOP_MASK = 0x0F0000;
private static final int FULL_MASK = 0xFFFFFF;
private String[] data;
private static final int STEP = 100; // for real test step should be Integer.MAX_VALUE
private int shiftIndex = -1;
private static final int LESS_MASK = 0x0000FF;
private static final int BIG_MASK = 0x00FF00;
public Dictionary() {
data = getData();
}
String getWord(int index) throws IndexOutOfBoundsException {
return data[index];
}
public String[] getData() {
return new String[]{"a", "aaaa", "asss", "az", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "test", "u", "v", "w", "x", "y", "z"};
}
public boolean isWordInTheDictionary(String word) {
boolean isFound = false;
int constantIndex = STEP; // predefined step
int flag = 0;
int i = 0;
while (true) {
i++;
if (flag == FULL_MASK) {
System.out.println("Word is not found ... Steps " + i);
break;
}
try {
String data = getWord(constantIndex);
if (null != data) {
int compareResult = word.compareTo(data);
if (compareResult > 0) {
if ((flag & LESS_MASK) == LESS_MASK) {
constantIndex = prepareIndex(false, constantIndex);
if (shiftIndex == 1)
flag |= BIGGEST_TOP_MASK;
} else {
constantIndex = constantIndex * 2;
}
flag |= BIG_MASK;
} else if (compareResult < 0) {
if ((flag & BIG_MASK) == BIG_MASK) {
constantIndex = prepareIndex(true, constantIndex);
if (shiftIndex == 1)
flag |= LESS_TOP_MASK;
} else {
constantIndex = constantIndex / 2;
}
flag |= LESS_MASK;
} else {
// YES!!! We found word.
isFound = true;
System.out.println("Steps " + i);
break;
}
}
} catch (IndexOutOfBoundsException e) {
if (flag > 0) {
constantIndex = prepareIndex(true, constantIndex);
flag |= LESS_MASK;
} else constantIndex = constantIndex / 2;
}
}
return isFound;
}
private int prepareIndex(boolean isBiggest, int constantIndex) {
shiftIndex = (int) Math.ceil(getIndex(shiftIndex == -1 ? constantIndex : shiftIndex));
if (isBiggest)
constantIndex = constantIndex - shiftIndex;
else
constantIndex = constantIndex + shiftIndex;
return constantIndex;
}
private double getIndex(double constantIndex) {
if (constantIndex <= 1)
return 1;
return constantIndex / 2;
}
}
It sounds like the part they really want you to think about is how to handle the fact that you don't know the size of the dictionary. I think they assume that you can give them a binary search. So the real question is how do you manipulate the range of the search as it progresses.
Once you have found a value in the dictionary that is greater than your search target (or out of bounds), the rest looks like standard binary search. The hard part is how do you optimally expand the range when the target value is greater than the dictionary value that you've looked up. It looks like you are expanding by a factor of 1.5. This could be really problematic with a huge dictionary and a small fixed initial step like you have (100). Consider how many times your algorithm would have to expand the range upwards if there were 50 million words and you were searching for 'zebra'.
Here's an idea: use the ordered nature of the collection to your advantage by assuming the first letter of each word is evenly distributed amongst the letters of the alphabet (this will never be true, but without knowing more about the collection of words it's probably the best you can do). Then weight the amount of your range expansion by how far from the end you would expect the dictionary word to be.
So if you took your initial step of 100 and looked up the dictionary word at that index and it was 'aardvark', you would expand your range a lot more for the next step than if it was 'walrus.' Still O(log n) but probably much better for most collections of words.
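A rough sketch of that weighting, assuming lowercase words and the uniform first-letter distribution described above (both function names are illustrative):

// Estimate how far through the dictionary a word sits, from its first letter.
static double fractionOfDictionary(String word) {
    return (word.charAt(0) - 'a' + 1) / 26.0; // 'a' -> ~0.04, 'z' -> 1.0
}

// Next upper-bound probe: expand proportionally to how much further we need to go,
// but at least double so the number of probes stays O(log n).
static int nextUpperBound(int currentIndex, String wordAtIndex, String target) {
    double seen = fractionOfDictionary(wordAtIndex);
    double wanted = fractionOfDictionary(target);
    double growth = Math.max(2.0, wanted / seen);
    return (int) (currentIndex * growth);
}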
Here is an alternative implementation that uses Collections.binarySearch. It fails if one of the words in the list starts with the character '\uffff' (that is Unicode 0xFFFF, which is not a valid Unicode character).
public static class ListProxy extends AbstractList<String> implements RandomAccess
{
@Override public String get( int index )
{
try {
return getWord( index );
} catch( IndexOutOfBoundsException ex ) {
return "\uffff";
}
}
@Override public int size()
{
return Integer.MAX_VALUE;
}
}
public static boolean isWordInTheDictionary( String word )
{
return Collections.binarySearch( new ListProxy(), word ) >= 0;
}
Update: I modified it so that it implements RandomAccess, since the binarySearch in Collections would otherwise use an iterator-based search on such a large list, which would be extremely slow. It should now be decently fast, since the binary search needs only 31 iterations even though the List pretends to be as large as possible.
Here is a slightly modified version that remembers the smallest failed index to converge its proclaimed size to the actual size of the dictionary en passant and thus avoids almost all exceptions in successive lookups. Although you would need to create a new ListProxy instance whenever the size of the dictionary could have changed.
public static class ListProxy extends AbstractList<String> implements RandomAccess
{
private int size = Integer.MAX_VALUE;
@Override public String get( int index )
{
try {
if( index < size )
return getWord( index );
} catch( IndexOutOfBoundsException ex ) {
size = index;
}
return "\uffff";
}
@Override public int size()
{
return size;
}
}
private static ListProxy listProxy = new ListProxy();
public static boolean isWordInTheDictionary( String word )
{
return Collections.binarySearch( listProxy , word ) >= 0;
}
You have the right idea, but I think your implementation is overly complicated. You want to do a binary search, but you don't know what the upper bound is. So instead of starting at the middle, you start at index 1 (assuming dictionary indexes start at 0).
If the word you're looking for is "less than" the current dictionary word, halve the distance between the current index and your "low" value. ("low" starts at 0, of course).
If the word you're looking for is "greater than" the word at the index you just examined, then either halve the distance between the current index and your "high" value ("high" starts at 2) or, if index and "high" are the same, double the index.
If doubling the index gives you an out of range exception, you halve the distance between the current value and the doubled value. So if going from 16 to 32 throws an exception, try 24. And, of course, keep track of the fact that 32 is more than the max.
So a search sequence might look like 1, 2, 4, 8, 16, 12, 14 - found!
It's the same concept as a binary search, but rather than starting with low = 0, high = n-1, you start with low = 0, high = 2, and double the high value when you need to. It's still O(log N), although the constant is going to be a bit larger than with a "normal" binary search.
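Sketched against the given getWord API (just a sketch: out-of-range probes are detected via the exception, and the doubling would need an overflow guard for truly huge dictionaries):

public boolean isWordInTheDictionary(String word) {
    int low = 0;
    int high = 1;
    // Grow 'high' until the word there is >= the target, or we run off the end.
    while (true) {
        try {
            if (getWord(high).compareTo(word) >= 0) break;
            low = high;
            high *= 2;
        } catch (IndexOutOfBoundsException e) {
            break; // 'high' is beyond the dictionary; the search below copes with that
        }
    }
    // Standard binary search on [low, high]; out-of-range probes count as "greater".
    while (low <= high) {
        int mid = low + (high - low) / 2;
        int cmp;
        try {
            cmp = getWord(mid).compareTo(word);
        } catch (IndexOutOfBoundsException e) {
            cmp = 1;
        }
        if (cmp == 0) return true;
        if (cmp < 0) low = mid + 1; else high = mid - 1;
    }
    return false;
}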
You can incur a one-time cost of O(n), if you know that the dictionary will not change. You can add all the words in the dictionary to a hashtable, and then any subsequent calls to isWordInDictionary() will be O(1) (in theory).
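A sketch of that preload (getWord as given; the HashSet is an assumption, any set would do):

Set<String> cache = new HashSet<>();
try {
    for (int i = 0; ; i++) {
        cache.add(getWord(i)); // one-time O(n) pass over the dictionary
    }
} catch (IndexOutOfBoundsException end) {
    // reached the end of the dictionary
}
// afterwards, isWordInDictionary(word) is just cache.contains(word)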
Use the getWord() API to copy the entire contents of the dictionary into a more sensible data structure (e.g. hash table, trie, perhaps even augmented by a Bloom filter). ;-)
In a different language:
#!/usr/bin/perl
$t=0;
$cur=1;
$under=0;
$EOL=int(rand(1000000))+1;
$TARGET=int(rand(1000000))+1;
if ($TARGET>$EOL)
{
$x=$EOL;
$EOL=$TARGET;
$TARGET=$x;
}
print "Looking for $TARGET with EOL $EOL\n";
sub testWord($)
{
my($a)=@_;
++$t;
return 0 if ($a eq $TARGET);
return -2 if ($a > $EOL);
return 1 if ($a > $TARGET);
return -1;
}
while ($r = testWord($cur))
{
print "Tested $cur, got $r\n";
if ($r == 1) { $over=$cur; }
if ($r == -1) { $under=$cur; }
if ($r == -2) { $over = $cur; }
if ($over)
{
$cur = int(($over-$under)/2)+$under;
$cur++ if ($cur <= $under);
$cur-- if ($cur >= $over);
}
else
{
$cur *= 2;
}
}
print "Found $TARGET at $r in $t tests\n";
The main benefit of this one is that it is a bit simpler to understand. I think it may be more efficient if your first guesses are below the target, since I don't think you are taking advantage of the space you have already "searched", but that is just from a quick glance at your code. Since it looks for numbers for simplicity, it doesn't have to deal with not finding the target, but that is an easy extension.
@Sergii Zagriichuk, hope the interview went well. Good luck with that.
I think, just as @alexcoco said, binary search is the answer.
Other options I see are only available if you could extend the dictionary. You could make it slightly better: e.g. you could count the words for each letter and keep track of the counts; this way you would effectively have to work only on a subset of words.
Or, as others are saying, entirely implement your own dictionary structure.
I know this doesn't answer your question properly. But I cannot see other possibilities.
BTW it would be nice to see your algorithm.
EDIT:
Expanding on my comment under bshields' answer...
@Sergii Zagriichuk, even better would be to remember the last index where we had null (no word), I think. Then on each run you could check whether it is still true. If not, expand the range to a 'previous index' obtained by reversing the binary search behaviour, so that we have null again. This way you would always adjust the size of the range of your search algorithm, adapting to the current state of the dictionary as needed. The changes would have to be significant to trigger a range adjustment, so the adjustment wouldn't have any real negative impact on the algorithm. Also, dictionaries tend to be static in nature, so this should work :)
On one hand, yes, you are right about the binary search implementation. But on the other hand, in case the dictionary is static and does not change between lookups, we could suggest a different algorithm. Here we have a common problem: sorting/searching strings differs from sorting/searching an int array, so getWord(i).compareTo(string) is O(min(length0, length1)).
Suppose we have a request to find the words w0, w1, ... wN; during the lookup we could build up a tree of indices (probably some suffix tree will be good enough for this task).
During the next lookup request we have the following set a1, a2, ... aM, so to decrease the average time we could first narrow the range by searching for the position in the tree.
The problem with this implementation is concurrency and memory usage, so the next step is implementing a strategy to make the search tree smaller.
PS: the main aim was to check the ideas and problems you suggested.
Well, I think the fact that the dictionary is sorted can be utilized in a better way.
Say you are looking for the word "zebra", whereas the first guess search resulted in "abcg".
We can use this info in choosing the second guess index. In my case the returned word starts with a, whereas I am looking for something starting with z. So rather than making a static jump, I can make a calculated jump based on the current result and the desired result. Suppose my next jump takes me to the word "yvu": now I know I am very near, so I will make a rather small jump compared to the previous case.
Here is my solution. It uses O(log n) operations. The first part of the code tries to find an estimate of the length, and the second part takes advantage of the fact that the dictionary is sorted and performs a binary search.
boolean isWordInTheDictionary(String word){
if (word == null){
return false;
}
// estimate the length of the dictionary array
int len = 2; // getWord takes an int index, so use int rather than long
while (true) {
    try {
        getWord(len); // probe; we only care whether the index is valid
        len = len * 2;
    } catch (IndexOutOfBoundsException e) {
        // found upper bound, break from loop
        break;
    }
}
// Do a modified binary search using the estimated length
int beg = -1; // exclusive lower bound, so that index 0 is still examined
int end = len;
String tempWrd;
while(true){
System.out.println(String.format("beg: %s, end=%s, (beg+end)/2=%s ", beg,end,(beg+end)/2));
if(end - beg <= 1){
return false;
}
int idx = (beg + end) / 2;
try {
    tempWrd = getWord(idx);
} catch (IndexOutOfBoundsException e) {
    // probe landed beyond the real end of the dictionary (len overshoots the size)
    end = idx;
    continue;
}
if ( word.compareTo(tempWrd) > 0){
beg = idx;
}
else if(word.compareTo(tempWrd) < 0){
end= idx;
}else{
// found the word..
System.out.println(String.format("getword at index: %s, =%s", idx,getWord(idx)));
return true;
}
}
}
Assuming the dictionary is 0-based, I would decompose the search into two parts.
First, given that the index parameter to getWord() is an integer, and assuming that the index must be a number between 0 and the maximum positive integer, perform a binary search over that range in order to find the maximum valid index (irrespective of the word values). This operation is O(log N), since it is a simple binary search.
Once the size of the dictionary is obtained, a second ordinary binary search (again of complexity O(log N)) will bring up the desired answer.
Since O(log N)+O(log N) is O(log N), this algorithm complies with your requirement.
I'm in a hiring process which asked me this same problem...
My approach was a bit different, and considering the dictionary (a web service) I have, it's about 30% more efficient (for the words I've tested).
Here is the solution:
https://github.com/gustavompo/wordfinder
I won't post the whole solution here because it's decoupled into classes and methods, but the core algorithm is this:
public WordFindingResult FindWord(string word)
{
var callsCount = 0;
var lowerLimit = new WordFindingLimit(0, null);
var upperLimit = new WordFindingLimit(int.MaxValue, null);
var wordToFind = new Word(word);
var wordIndex = _initialIndex;
while (callsCount <= _maximumCallsCount)
{
if (CouldNotFindWord(lowerLimit, upperLimit))
return new WordFindingResult(callsCount, -1, string.Empty, WordFindingResult.ErrorCodes.NOT_FOUND);
var wordFound = RetrieveWordAt(wordIndex);
callsCount++;
if (wordToFind.Equals(wordFound))
return new WordFindingResult(callsCount, wordIndex, wordFound.OriginalWordString);
else if (IsIndexTooHigh(wordToFind, wordFound))
{
upperLimit = new WordFindingLimit(wordIndex, wordFound);
wordIndex = IndexConsideringTooHighPreviousResult(lowerLimit, wordIndex);
}
else
{
lowerLimit = new WordFindingLimit(wordIndex, wordFound);
wordIndex = IndexConsideringTooLowPreviousResult(lowerLimit, upperLimit, wordToFind);
}
}
return new WordFindingResult(callsCount, -1, string.Empty, WordFindingResult.ErrorCodes.CALLS_LIMIT_EXCEEDED);
}
private int IndexConsideringTooHighPreviousResult(WordFindingLimit maxLowerLimit, int current)
{
return BinarySearch(maxLowerLimit.Index, current);
}
private int IndexConsideringTooLowPreviousResult(WordFindingLimit maxLowerLimit, WordFindingLimit minUpperLimit, Word target)
{
if (AreLowerAndUpperLimitsDefined(maxLowerLimit, minUpperLimit))
return BinarySearch(maxLowerLimit.Index, minUpperLimit.Index);
var scoreByIndexPosition = maxLowerLimit.Index / maxLowerLimit.Word.Score;
var indexOfTargetBasedInScore = (int)(target.Score * scoreByIndexPosition);
return indexOfTargetBasedInScore;
}