The code given below is taking a very long time to execute; I need a correct answer.
for (long i = 0; i < 9223372036854775807L; i++) {
    // code
}
Can anyone suggest an alternative approach or a correction to this version?
The one change you could make is to use the constant Long.MAX_VALUE instead.
But don't expect that to change anything. That loop still takes all the time it takes to iterate that range.
In other words: how you express "I want to iterate 2 to the power of 63 minus 1 times" doesn't influence the time required to do that at all.
The only way to cut the runtime is to slice the range and have multiple smaller loops run in parallel. But of course, without any detail on the loop body it is not possible to determine whether parallelism is applicable here.
And of course, the real question is what actual problem you intend to solve by iterating code for millions or billions of years.
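For what it's worth, here is a minimal sketch of what "slicing the range" could look like with a parallel stream, assuming the loop body is independent per iteration (the class name is illustrative, and even parallelised the full range is still far too large to ever finish):

import java.util.stream.LongStream;

public class ParallelRangeSketch {
    public static void main(String[] args) {
        // Split the huge range across worker threads with a parallel stream.
        // This only helps if each iteration is independent of the others,
        // and even then, iterating the full range remains hopeless.
        LongStream.range(0L, Long.MAX_VALUE)
                  .parallel()
                  .forEach(i -> {
                      // loop body would go here
                  });
    }
}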
from this answer https://stackoverflow.com/a/15505663/7806805
and since your question is
The code given below is taking a very long time to execute; I need a correct answer.
If you were executing one iteration per nanosecond, it would still take over 292 years to exhaust that range, according to that source.
So it's natural for it to take a long time.
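As a quick sanity check of that figure, here is a tiny throwaway calculation (the class name is just illustrative):

public class LoopDurationEstimate {
    public static void main(String[] args) {
        // Long.MAX_VALUE iterations at one iteration per nanosecond.
        double seconds = Long.MAX_VALUE / 1_000_000_000.0;
        double years = seconds / (365.25 * 24 * 60 * 60);
        System.out.printf("about %.0f years%n", years); // roughly 292 years
    }
}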
I suggest using Long.MAX_VALUE as the loop bound:
for (long i = 0L; i < Long.MAX_VALUE; i++) {
    // code
}
For a problem I'm designing in Java, I'm given a list of dates and winning lottery numbers. I'm supposed to do things with them, and spit them back out in order. I decided to use a LinkedHashMap to solve it, with Date as the key holding the date and int[] as the value holding the array of winning numbers.
The thing is, when I call the .values() method, I'm noticing the numbers are no longer in insertion order. The code I'm running is:
for (int i = 0; i < 30; i++) { // testing the first 30 to see if they are ordered
    System.out.println(Arrays.toString((int[]) (winningNumbers.values().toArray()[i])));
}
Can anyone see what exactly I'm doing wrong? I'm almost tempted to just use .get() and iterate through the dates, since the dates do follow some order, but that might make using a LinkedHashMap moot. I might as well be using a two-dimensional ArrayList in that case. Thanks in advance!
EDIT: Adding code for further examination!
Lottery Class: http://pastebin.com/9ezF5U7e
Text file: http://pastebin.com/iD8jm7f8
To test, call checkOldLTNums(). Just pass it any int[] array; it doesn't use it, at least not for this problem. The output differs from the first lines of the .txt file, which is in order. Thanks!
EDIT2:
I think I figured out why it fails. I used a smaller .txt file, and the code worked perfectly. It might be that it isn't wise to load 1,900 entries into memory at once. I suppose it's better to just load individual lines and compare them instead of grabbing everything at once. Is this logic sound? Any advice going forward would be useful.
The values() method returns the Map's values in the same order they were inserted; that's what ordered means. Don't confuse it with sorted, which means something different: that the entries follow the ordering defined by the key's Comparator, or the keys' natural ordering if no comparator was provided.
Perhaps you'd be better off using a SortedMap that keeps the entries sorted by key. To summarise: LinkedHashMap is ordered and TreeMap is sorted.
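A small sketch of the difference, with illustrative keys rather than your lottery data:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class OrderedVsSorted {
    public static void main(String[] args) {
        Map<String, Integer> insertionOrder = new LinkedHashMap<>(); // keeps insertion order
        Map<String, Integer> keyOrder = new TreeMap<>();             // keeps keys sorted
        for (String key : new String[] {"banana", "apple", "cherry"}) {
            insertionOrder.put(key, key.length());
            keyOrder.put(key, key.length());
        }
        System.out.println(insertionOrder.keySet()); // [banana, apple, cherry]
        System.out.println(keyOrder.keySet());       // [apple, banana, cherry]
    }
}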
Try using a TreeMap instead. You should get the values in ascending key order that way.
I can't believe I missed this. I did some more troubleshooting, and realized some of the Date keys were replacing others. After doing some testing, I finally realized that the formatting I was using was off: "dd/MM/yyyy" instead of "MM/dd/yyyy". That's what I get for copy-pasting, haha. Thanks to all who helped!
I need to take an ArrayList<Conference> conferences, where Conference contains a public Date beginDate field, and sort it as follows: first, separate the conferences into buckets representing the unique months of beginDate, and then sort by beginDate within each bucket. I'm sure this is a common need, so I was hoping someone here would have some tips.
My idea for this follows. Please tell me why it is sub-optimal :)
1. Create a HashMap<Date, ArrayList<Conference>>.
2. Iterate over conferences and use a small static helper to find the first day of the month of each beginDate. Check whether an ArrayList<Conference> already exists for that Date (creating one if not), then add the conference to that list. All conferences in the same month end up under the same key, because first_day_of_month(any_day_in_month) is the same Date.
3. Iterate over each ArrayList value of the HashMap and use a standard sort to order it by date.
This seems more complicated than necessary, but please let me know why it's bad and what can be done to fix it.
Edit: Also, if it matters, I will eventually need to add all of those ArrayList back into an ArrayAdapter which will go in commonsware's MergeAdapter... :(
If you sort by date from the very beginning, each month's entries will be contiguous anyway. After the initial sort you can iterate over all entries and insert an artificial "split" whenever an entry is the first one of a new month. I am not even sure you will need such a differentiation (the question is a bit vague on this).
The total complexity of the proposed algorithm is O(n log n), where n is the number of elements, and of course there is no better solution.
NOTE: Incidentally, this algorithm has a better operation count than the one you proposed.
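Here is a minimal sketch of the sort-then-split idea, assuming a Conference with a public Date beginDate as described in the question (everything else, including the class and method names, is illustrative):

import java.util.ArrayList;
import java.util.Calendar;
import java.util.Collections;
import java.util.Comparator;
import java.util.Date;
import java.util.List;

public class ConferenceBuckets {
    static class Conference {
        public Date beginDate;
        Conference(Date beginDate) { this.beginDate = beginDate; }
    }

    static List<List<Conference>> bucketByMonth(List<Conference> conferences) {
        // One O(n log n) sort by beginDate.
        Collections.sort(conferences, new Comparator<Conference>() {
            public int compare(Conference a, Conference b) {
                return a.beginDate.compareTo(b.beginDate);
            }
        });
        // One linear pass: start a new bucket whenever the month (or year) changes.
        List<List<Conference>> buckets = new ArrayList<>();
        Calendar previous = null;
        for (Conference c : conferences) {
            Calendar current = Calendar.getInstance();
            current.setTime(c.beginDate);
            if (previous == null
                    || current.get(Calendar.YEAR) != previous.get(Calendar.YEAR)
                    || current.get(Calendar.MONTH) != previous.get(Calendar.MONTH)) {
                buckets.add(new ArrayList<Conference>());
            }
            buckets.get(buckets.size() - 1).add(c);
            previous = current;
        }
        return buckets;
    }
}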
Possible Duplicate:
Android (method for checking an arrays for repeated numbers?)
I've just asked a question and got a few answers, and I was very happy about that, but the answers were very complicated. I'm quite new to Android, so could someone maybe give me some example code or an explanation that isn't so complicated? I've tried their code and tried to make sense of it, but I can't.
Here is the question:
Could anyone help me? I am making an app, and in the Java code numbers are sent to an int array, and I need to check whether any of the numbers in the array are repeated and, if there are, to call a method or something like that. Is there a method to check this, or something similar? Or would I have to do it using loops and if statements, which I have tried, but it is getting a bit long and confusing? Any advice would be great, thanks.
int test[] = {0, 0, 0, 0, 0, 0, 0}; // the array
(A method to check whether any of the array's numbers are repeated)
First, don't create a duplicate topic.
Second, you are looking for a Java answer; this is not related to Android.
I think it might be better if you first learn Java (or another similar language).
I would store the items in a Set if you do not want them to repeat. If add returns false, then you have a repeated number:
Set<Integer> uniqueItems = new HashSet<>();
for (int i = 0; i < test.length; i++) {
    if (!uniqueItems.add(test[i])) {        // add returns false for a duplicate
        System.out.println("The item is already in the set");
    }
}
First, sort the array. Then walk through it, comparing each element to its neighbours. Or you could store the data in a Set, which cannot contain duplicates.
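A minimal sketch of that sort-then-scan approach (the class and method names are illustrative); it sorts a copy so the original array is left untouched:

import java.util.Arrays;

public class DuplicateCheck {
    static boolean hasDuplicates(int[] values) {
        int[] copy = Arrays.copyOf(values, values.length);
        Arrays.sort(copy);                    // duplicates end up adjacent
        for (int i = 1; i < copy.length; i++) {
            if (copy[i] == copy[i - 1]) {
                return true;                  // found a repeated number
            }
        }
        return false;
    }
}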
Arrays.asList(test).contains(valueYouWantCheck) works if test is an Integer[]; note that for a primitive int[], Arrays.asList wraps the whole array as a single element, so it will not do what you expect.
If you want to check each and every value in the test array, then yes, I think you need to loop over the array.
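For example, a small hand-rolled loop (the class and method names are illustrative):

public class ArrayContains {
    static boolean containsValue(int[] values, int target) {
        for (int v : values) {   // a plain loop works fine for a primitive int[]
            if (v == target) {
                return true;
            }
        }
        return false;
    }
}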
I'm programming a java application that reads strictly text files (.txt). These files can contain upwards of 120,000 words.
The application needs to store all 120,000+ words. It needs to name them word_1, word_2, etc. And it also needs to access these words to perform various methods on them.
The methods all have to do with Strings. For instance, a method will be called to say how many letters are in word_80. Another method will be called to say what specific letters are in word_2200.
In addition, some methods will compare two words. For instance, a method will be called to compare word_80 with word_2200 and needs to return which has more letters. Another method will be called to compare word_80 with word_2200 and needs to return what specific letters both words share.
My question is: Since I'm working almost exclusively with Strings, is it best to store these words in one large ArrayList? Several small ArrayLists? Or should I be using one of the many other storage possibilities, like Vectors, HashSets, LinkedLists?
My two primary concerns are 1.) access speed, and 2.) having the greatest possible number of pre-built methods at my disposal.
Thank you for your help in advance!!
Wow! Thanks everybody for providing such a quick response to my question. All your suggestions have helped me immensely. I’m thinking through and considering all the options provided in your feedback.
Please forgive me for any fuzziness; and let me address your questions:
Q) English?
A) The text files are actually books written in English. The occurrence of a word in a second language would be rare – but not impossible. I’d put the percentage of non-English words in the text files at .0001%
Q) Homework?
A) I’m smilingly looking at my question’s wording now. Yes, it does resemble a school assignment. But no, it’s not homework.
Q) Duplicates?
A) Yes. And probably every five or so words, considering conjunctions, articles, etc.
Q) Access?
A) Both random and sequential. It’s certainly possible a method will locate a word at random. It’s equally possible a method will want to look for a matching word between word_1 and word_120000 sequentially. Which leads to the last question…
Q) Iterate over the whole list?
A) Yes.
Also, I plan on growing this program to perform many other methods on the words. I apologize again for my fuzziness. (Details do make a world of difference, do they not?)
Cheers!
I would store them in one large ArrayList and worry about (possibly unnecessary) optimisations later on.
Being inherently lazy, I don't think it's a good idea to optimise unless there's a demonstrated need. Otherwise, you're just wasting effort that could be better spent elsewhere.
In fact, if you can set an upper bound to your word count and you don't need any of the fancy List operations, I'd opt for a normal (native) array of string objects with an integer holding the actual number. This is likely to be faster than a class-based approach.
This gives you the greatest speed in accessing the individual elements whilst still retaining the ability to do all that wonderful string manipulation.
Note that I haven't benchmarked native arrays against ArrayLists. ArrayLists may be just as fast, so you should check this yourself if you have less blind faith in my abilities than I do :-).
If they do turn out to be just as fast (or even close), the added benefits (expandability, for one) may be enough to justify their use.
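A minimal sketch of that plain-array idea, assuming an upper bound on the word count (the bound and the class name are illustrative assumptions, not something from the question):

public class WordStore {
    private static final int MAX_WORDS = 200_000;   // assumed upper bound
    private final String[] words = new String[MAX_WORDS];
    private int wordCount = 0;                      // how many slots are actually used

    public void add(String word) {
        words[wordCount++] = word;                  // no bounds check in this sketch
    }

    public String get(int index) {                  // word_1 is index 0, and so on
        return words[index];
    }

    public int size() {
        return wordCount;
    }
}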
Just confirming pax's assumptions with a very naive benchmark:
import java.util.ArrayList;
import java.util.Date;
import java.util.Random;

public class AccessBenchmark {
    public static void main(String[] args) {
        int size = 120000;
        String[] arr = new String[size];
        ArrayList<String> al = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            String put = Integer.toHexString(i);
            // System.out.print(put + " ");
            al.add(put);
            arr[i] = put;
        }

        Random rand = new Random();
        Date start = new Date();
        for (int i = 0; i < 10000000; i++) {
            int get = rand.nextInt(size);
            String fetch = arr[get];
        }
        Date end = new Date();
        long diff = end.getTime() - start.getTime();
        System.out.println("array access took " + diff + " ms");

        start = new Date();
        for (int i = 0; i < 10000000; i++) {
            int get = rand.nextInt(size);
            String fetch = al.get(get);
        }
        end = new Date();
        diff = end.getTime() - start.getTime();
        System.out.println("array list access took " + diff + " ms");
    }
}
and the output:
array access took 578 ms
array list access took 907 ms
running it a few times the actual times seem to vary some, but generally array access is between 200 and 400 ms faster, over 10,000,000 iterations.
If you will access these Strings sequentially, a LinkedList would be the best choice.
For random access, an ArrayList has a nice memory usage / access speed tradeoff.
My take:
For a non-threaded program, an ArrayList is always fastest and simplest.
For a threaded program, a java.util.concurrent.ConcurrentHashMap<Integer,String> or java.util.concurrent.ConcurrentSkipListMap<Integer,String> is awesome. Perhaps you would later like to allow threads so as to make multiple queries against this huge thing simultaneously.
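A small sketch of that keyed, thread-safe approach (the class name and sample words are illustrative):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentWordStore {
    public static void main(String[] args) {
        ConcurrentMap<Integer, String> words = new ConcurrentHashMap<>();
        words.put(1, "the");                          // word_1
        words.put(2, "quick");                        // word_2
        // Safe to call from multiple query threads at once.
        System.out.println(words.get(2).length());    // letters in word_2
    }
}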
If you're going for fast traversal as well as compact size, use a DAWG (Directed Acyclic Word Graph.) This data structure takes the idea of a trie and improves upon it by finding and factoring out common suffixes as well as common prefixes.
http://en.wikipedia.org/wiki/Directed_acyclic_word_graph
Use a Hashtable? This will give you your best lookup speed.
ArrayList/Vector if order matters (it appears to, since you are calling the words "word_xxx"), or Hashtable/HashMap if it doesn't.
I'll leave the exercise of figuring out why you would want to use an ArrayList vs. a Vector or a HashTable vs. a HashMap up to you since I have a sneaking suspicion this is your homework. Check the Javadocs.
You're not going to get any methods from the Collections Framework classes that do what you've asked for in the examples above, since none of them perform String comparison operations. The exception is if you just want to order the words alphabetically or something, in which case you'd use one of the Tree implementations in the Collections Framework.
How about a radix tree or Patricia trie?
http://en.wikipedia.org/wiki/Radix_tree
The only advantage of a linked list over an array or array list would be if there are insertions and deletions at arbitrary places. I don't think this is the case here: You read in the document and build the list in order.
I THINK that when the original poster talked about finding "word_2200", he meant simply the 2200th word in the document, and not that there are arbitrary labels associated with each word. If so, then all he needs is indexed access to all the words. Hence, an array or array list. If there really is something more complex, if one word might be labeled "word_2200" and the next word is labeled "foobar_42" or some such, then yes, he'd need a more complex structure.
Hey, do you want to give us a clue WHY you want to do any of this? I'm hard pressed to remember the last time I said to myself, "Hey, I wonder if the 1,237th word in this document I'm reading is longer or shorter than the 842nd word?"
Depends on what the problem is - speed or memory.
If it's memory, the minimum solution is to write a function getWord(n) which scans the whole file each time it runs, and extracts word n.
Now, that's not a very good solution. A better solution is to decide how much memory you want to use: let's say 1000 items. Scan the file for words once when the app starts, and store a series of bookmarks containing the word number and the position in the file where it is located; do this in such a way that the bookmarks are more or less evenly spaced through the file.
Then, open the file for random access. The function getWord(n) now looks at the bookmarks to find the biggest word # <= n (please use a binary search), does a seek to get to the indicated location, and scans the file, counting the words, to find the requested word.
An even quicker solution, using rather more memory, is to build some sort of cache for the blocks, on the basis that getWord() requests usually come through in clusters. You can rig things up so that if someone asks for word #X and it's not in the bookmarks, then you seek for it and put it in the bookmarks, saving memory by consolidating whichever bookmark was least recently used.
And so on. It depends, really, on what the problem is and on what kind of retrieval patterns are likely.
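Here is a rough sketch of the bookmark idea under some simplifying assumptions: plain ASCII text, words separated by whitespace, bookmarks spaced by word count rather than by file position (which lets us index the bookmark list directly instead of binary-searching it). The class name, the spacing of 120, and the 0-based word numbering are all illustrative choices, not something from the question:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class BookmarkedWordFile {
    private static final int BOOKMARK_EVERY = 120;          // bookmark every 120th word
    private final RandomAccessFile file;
    private final List<Long> bookmarks = new ArrayList<>(); // byte offset of word k * BOOKMARK_EVERY

    public BookmarkedWordFile(String path) throws IOException {
        file = new RandomAccessFile(path, "r");
        long offset = 0;
        int wordIndex = 0;
        boolean inWord = false;
        int c;
        while ((c = file.read()) != -1) {                    // one full scan at startup
            if (!Character.isWhitespace(c)) {
                if (!inWord) {                               // a new word starts at this byte
                    if (wordIndex % BOOKMARK_EVERY == 0) {
                        bookmarks.add(offset);
                    }
                    wordIndex++;
                    inWord = true;
                }
            } else {
                inWord = false;
            }
            offset++;
        }
    }

    // Returns the n-th word (0-based) by seeking to the nearest bookmark and scanning forward.
    public String getWord(int n) throws IOException {
        file.seek(bookmarks.get(n / BOOKMARK_EVERY));
        int remaining = n % BOOKMARK_EVERY;                  // words to skip after the bookmark
        StringBuilder current = new StringBuilder();
        int c;
        while ((c = file.read()) != -1) {
            if (Character.isWhitespace(c)) {
                if (current.length() > 0) {                  // a word just ended
                    if (remaining == 0) {
                        return current.toString();
                    }
                    remaining--;
                    current.setLength(0);
                }
            } else {
                current.append((char) c);
            }
        }
        // the last word in the file may not be followed by whitespace
        return (current.length() > 0 && remaining == 0) ? current.toString() : null;
    }
}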
I don't understand why so many people are suggesting Arraylist, or the like, since you don't mention ever having to iterate over the whole list. Further, it seems you want to access them as key/value pairs ("word_348"="pedantic").
For the fastest access, I would use a TreeMap, which does O(log n) tree lookups to find your keys. Its only downside is that it's unsynchronized, but that's not a problem for your application.
http://java.sun.com/javase/6/docs/api/java/util/TreeMap.html
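A tiny sketch of that key/value usage, reusing the "word_348"="pedantic" example from above (the other entry is made up):

import java.util.TreeMap;

public class WordMapExample {
    public static void main(String[] args) {
        TreeMap<String, String> words = new TreeMap<>();
        words.put("word_348", "pedantic");
        words.put("word_80", "lottery");
        System.out.println(words.get("word_348"));            // pedantic
        System.out.println(words.get("word_80").length());    // letter count of word_80
    }
}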