I need to sort a queue using only one other queue and a constant number of variables, and only with "isEmpty", "enqueue", "dequeue".
I've tried to do this for a day and I haven't figured it out yet. I searched all over the place and there is no question with that kind of restriction. Do you think it's possible?
If it's in Java, this is essentially a selection sort: repeatedly find the max and move it over.
public void sort(Queue<Integer> q) {
    // count q's elements while moving them all into copy
    Queue<Integer> copy = new LinkedList<Integer>(); // Queue is an interface; LinkedList implements it
    int length = 0;
    while (!q.isEmpty()) {
        copy.add(q.remove());
        length++;
    }
    int maxNum, temp;
    while (!copy.isEmpty()) {
        // search for the max number by rotating copy exactly once
        maxNum = copy.remove();
        copy.add(maxNum);
        for (int i = 0; i < length - 1; i++) {
            temp = copy.remove();
            if (temp > maxNum) maxNum = temp;
            copy.add(temp);
        }
        // delete the max from copy and append it to q (q fills largest-first)
        for (int i = 0; i < length; i++) {
            temp = copy.remove();
            if (temp == maxNum) {
                q.add(maxNum);
                break;
            }
            copy.add(temp);
        }
        length--; // copy now holds one element fewer
    }
}
It should work.
So, an explanation:
Count q's elements while moving them into copy. Now you can iterate through copy, knowing how many values it holds. Search for the max number, insert it into q, and remove it from copy. After you've done that, decrement length, because copy now has one element fewer, and repeat until length == 0 and q is sorted.
The algorithm that sorts q using two extra queues goes like this: for each pass over q, move q into a copy while finding its max number, making sure the max is left out of the copy, and then insert that max into a new sorted queue. After you've done that, move copy back into q and repeat until q is empty (a sketch of this version follows below).
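A minimal sketch of that two-extra-queue version, with ArrayDeque standing in for the abstract queue and add/remove for enqueue/dequeue. Like the one-queue code above, it emits elements largest-first.
import java.util.ArrayDeque;
import java.util.Queue;

public static Queue<Integer> sortWithTwoQueues(Queue<Integer> q) {
    Queue<Integer> copy = new ArrayDeque<Integer>();
    Queue<Integer> sorted = new ArrayDeque<Integer>();
    while (!q.isEmpty()) {
        // move q into copy, keeping the current max out of copy
        int max = q.remove();
        while (!q.isEmpty()) {
            int current = q.remove();
            if (current > max) {
                copy.add(max); // the old max is no longer the max
                max = current;
            } else {
                copy.add(current);
            }
        }
        sorted.add(max);
        // refill q from copy and repeat with one element fewer
        while (!copy.isEmpty()) {
            q.add(copy.remove());
        }
    }
    return sorted;
}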
The whole point of that algorithm is that we use one extra queue to iterate through q, and another to save the result while iterating, because if we don't use another queue to iterate through q, we won't be able to tell when to stop iterating. But if we know when to stop iterating through q (by knowing its length), we don't really need that extra queue, so we can use only one other queue.
I don't think you even need another queue. Give me half an hour and I will write you an algorithm using only the queue we've got.
When you sort an array using the types of sorting algorithms we currently have, sorting in less than O(n^2) requires O(n) memory, which is the same as saying you will NEVER use a constant number of variables if you want less than O(n^2).
You wanted a constant number of variables, so I assumed you just needed a simple O(n^2) algorithm.
I think you can do a merge sort. I'll do it tomorrow.
What is a "final number of variables"? I take it a constant number is meant. Missing from the problem statement is how the front elements of each queue can be compared. Is there a size function that returns the number of elements in a queue?
For 2 queues, a bottom up merge sort can be implemented, but requires a few variables for each queue to keep track of a logical boundary in each queue. The logical boundary effectively splits each queue into two parts, a front (old) part and a back (new) part. This is a sort with time complexity of O(n log(n)).
Call the two queues A and B, with all the elements on A. If there is no size function, copy A to B to generate a count of elements.
Separate the elements evenly between A and B (one to A, one to B, ... ).
The merge sort repeatedly merges a run from A and B, alternating the merged run output between A and B. On the first pass, runs of size 1 are merged to form runs of size 2, the next pass runs of size 2 are merged to form runs of size 4, and so on, until the final pass where 2 runs are merged onto a single queue.
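Here is a minimal sketch of that scheme, assuming a peek operation is available to compare front elements, and with ArrayDeque's offer/poll/isEmpty standing in for enqueue/dequeue/isEmpty:
import java.util.ArrayDeque;
import java.util.Queue;

public static void mergeSortTwoQueues(Queue<Integer> a) {
    Queue<Integer> b = new ArrayDeque<Integer>();
    // no size function: count the elements by cycling them through B and back
    int n = 0;
    while (!a.isEmpty()) { b.offer(a.poll()); n++; }
    while (!b.isEmpty()) a.offer(b.poll());
    if (n < 2) return;
    // separate the elements evenly (one stays on A, one goes to B, ...)
    for (int i = 0; i < n; i++) {
        if (i % 2 == 0) a.offer(a.poll()); else b.offer(a.poll());
    }
    int aCount = (n + 1) / 2; // logical boundary: old elements at A's front
    int bCount = n / 2;       // logical boundary: old elements at B's front
    // each pass merges runs of width w from the old fronts of A and B,
    // appending the merged output alternately to the backs of A and B
    for (int w = 1; bCount > 0; w *= 2) {
        int aOld = aCount, bOld = bCount;
        aCount = 0;
        bCount = 0;
        boolean outToA = true;
        while (aOld > 0 || bOld > 0) {
            int ra = Math.min(w, aOld); // run remaining on A
            int rb = Math.min(w, bOld); // run remaining on B
            while (ra > 0 || rb > 0) {  // merge one run from each queue
                int x;
                if (rb == 0 || (ra > 0 && a.peek() <= b.peek())) {
                    x = a.poll(); ra--; aOld--;
                } else {
                    x = b.poll(); rb--; bOld--;
                }
                if (outToA) { a.offer(x); aCount++; }
                else        { b.offer(x); bCount++; }
            }
            outToA = !outToA; // alternate the output queue per merged run
        }
    }
    // the final pass leaves a single sorted run on A
}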
Related
I am developing an agent-based model in Java. I have used a profiler to reduce any inefficiencies down to the point that the only thing holding it back is Java's Collections.shuffle().
The agents (they're animals) in my model need to be processed in a random order so that no agent is consistently processed before the others.
I am looking for: Either a faster way to shuffle than Java's Collections.shuffle() or an alternative method of processing the elements in an ArrayList in a randomized order that is significantly faster. If you know of a data structure that would be faster than an ArrayList, by all means please answer. I have considered LinkedList and ArrayDeque, but they aren't making much of a difference.
Currently, I have over 1,000,000 elements in the list I am trying to shuffle. Over time, this amount increases and it is becoming increasingly inefficient to shuffle it.
Is there an alternative data structure or way of randomizing the processing of elements that is faster?
I only need to be able to store elements and process them in a randomized order. I do not use contains or anything more complex than storage and iterating over them.
Here is some sample code to better explain what I am trying to achieve:
UPDATE: Sorry for the ConcurrentModificationException, I didn't realize I had done that and I didn't intend to confuse anyone. Fixed it in the code below.
ArrayList<Agent> list = new ArrayList<>();

void process()
{
    list.add(new Agent("Zebra"));
    Random r = new Random();
    for (int i = 0; i < 100000; i++)
    {
        ArrayList<Agent> newlist = new ArrayList<>();
        Collections.shuffle(list); // something that makes the order random (random quality does not matter to me), yet faster than a shuffle
        for (Agent agent : list)
        {
            newlist.add(agent);
            if (r.nextDouble() > 0.99) // 1% chance of adding another agent to the list
            {
                newlist.add(new Agent("Lion"));
            }
        }
        list = newlist;
    }
}
ANOTHER UPDATE
I thought about doing list.remove(rando.nextInt(list.size())), but since remove for an ArrayList is O(n), that would be even worse than the shuffle for such a large list.
I would use a simple ArrayList and not shuffle it at all. Instead select random list indices to process. To avoid processing a list element twice, I'd remove the processed elements from the list.
Now if the list is very large, removing a random entry itself would be the bottleneck. This can however be avoided easily by removing the last entry instead and moving it into the place the selected entry occupied before:
public String pullRandomElement(List<String> list, Random random) {
    // select a random list index
    int lastIndex = list.size() - 1;
    int index = random.nextInt(lastIndex + 1);
    String result = list.get(index);
    // drop the last entry and move it into the selected slot
    // (removing first avoids an out-of-bounds set when index == lastIndex)
    String last = list.remove(lastIndex);
    if (index != lastIndex) {
        list.set(index, last);
    }
    return result;
}
Needless to say, you should choose a list implementation where get(index) and remove(lastIndex) are fast O(1), such as ArrayList. You may also want to add edge-case handling (such as the list being empty).
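For illustration, a draw-until-empty loop over this helper might look like the following, where process() is a hypothetical stand-in for whatever per-element work you do:
Random random = new Random();
while (!list.isEmpty()) {
    String element = pullRandomElement(list, random);
    process(element); // hypothetical per-element work
}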
You could use this: if you already have the list of items, generate a random index according to its size with nextInt.
ArrayList<String> list = new ArrayList<>();
// ... fill the list ...
int sizeOfCollection = list.size();
Random randomGenerator = new Random();
int randomId = randomGenerator.nextInt(sizeOfCollection); // assumes the list is non-empty
String x = list.get(randomId);
list.remove(randomId);
Since your code doesn't actually depend on the order of the list, it's enough to shuffle it once at the end of the processing.
void process() {
    Random r = new Random();
    for (int i = 0; i < 100000; i++) {
        List<String> additions = new ArrayList<>();
        for (String str : list) {
            if (r.nextDouble() > 0.9) {
                additions.add(str + str);
            }
        }
        list.addAll(additions); // collect first, add after the loop
    }
    Collections.shuffle(list);
}
Collecting the additions in a temporary list and adding them after the loop also avoids the ConcurrentModificationException that adding during iteration (as in the original code) would throw.
Collections.shuffle() uses the modern variant of Fisher-Yates Algorithm:
From https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle
To shuffle an array a of n elements (indices 0..n-1):
for i from n − 1 downto 1 do
j ← random integer such that 0 ≤ j ≤ i
exchange a[j] and a[i]
Collections.shuffle converts the list to an array, does the shuffle using just random.nextInt(), and then copies everything back (see http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/Collections.java#Collections.shuffle%28java.util.List%29).
You can only make this faster by avoiding the overhead of copying the array and writing it back:
Either write your own implementation of ArrayList where you can directly access the backing array, or access the field "elementData" of your ArrayList via reflection.
Now use the same algorithm as Collections.shuffle on that array, using the correct size().
This speeds things up because it avoids copying the whole array, as Collections.shuffle() does.
The access via reflection itself costs a bit of time, so this solution is faster only for higher numbers of elements.
I would not recommend this solution unless you want to win the race for the fastest shuffle in terms of execution time.
And as always when comparing speeds, make sure you warm up the VM by running the algorithm to be measured about 1000 times before starting the measurement.
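For reference, a minimal sketch of that in-place pass; elementData and size stand for whatever backing array and logical length you obtained by subclassing or reflection:
static void shuffleBackingArray(Object[] elementData, int size, Random rnd) {
    // modern Fisher-Yates: walk backwards, swapping each slot
    // with a uniformly chosen slot at or before it
    for (int i = size - 1; i > 0; i--) {
        int j = rnd.nextInt(i + 1); // 0 <= j <= i
        Object tmp = elementData[j];
        elementData[j] = elementData[i];
        elementData[i] = tmp;
    }
}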
According to the documentation, Collections.shuffle() runs in O(N) time.
This method runs in linear time. If the specified list does not implement the RandomAccess interface and is large, this implementation dumps the specified list into an array before shuffling it, and dumps the shuffled array back into the list. This avoids the quadratic behavior that would result from shuffling a "sequential access" list in place.
I recommend you use the public static void shuffle(List<?> list, Random rnd) overload, although the performance benefit will probably be negligible.
Improving the performance will be difficult unless you allow some bias, such as with partial shuffling (only a segment of the list gets re-shuffled each time) or under-shuffling. Under-shuffling means writing your own Fisher-Yates routine and skipping certain list indices during the reverse traversal; for example, you could skip all odd indices. However the end of your list would receive less shuffling than the front which is another form of bias.
If you had a fixed list size M, you might consider caching some large number N of different fixed index permutations (0 to M-1 in random order) in memory at application startup. Then you could just randomly select one of these pre-orderings whenever you iterate the collection and just iterate according to that particular previously defined permutation. If N were large (say 1000 or more), the overall bias would be small (and also relatively uniform) and would be very fast. However you noted your list slowly grows, so this approach wouldn't be viable.
I have a long sequence of order amounts, 1000, 3000, 50000, 1000000, etc.
I want to find out all orders whose amount is more than 100000.
The simplest solution is to iterate through the full list, compare each amount, and put the matches into another list, which gives you all orders whose amount is more than 100000.
Do we have any other data structure or algorithm approach which can solve this faster?
The sequence of input elements is unordered.
The approach that comes to mind could be to keep an ordered list of the orders and perform a binary search for the limit amount. All orders before that point in the ordered list will be less than the limit amount.
// Generate a random array of numbers
Random r = new Random();
Integer[] numbers = new Integer[r.nextInt(100) + 20];
for (int i = 0; i < numbers.length; i++) {
    numbers[i] = r.nextInt(50000) + 100;
}

// Order the array and print it
Arrays.sort(numbers);
System.out.println(Arrays.toString(numbers));

// Find the insertion point of the amount limit (10000 in this case);
// binarySearch returns -(insertionPoint) - 1 when the key is absent
int index = Arrays.binarySearch(numbers, 10000);
int insertionPoint = index >= 0 ? index : -index - 1;

// Print the slice of the array that is lower than the limit amount
System.out.println(Arrays.toString(Arrays.copyOfRange(numbers, 0, insertionPoint)));
It depends on how many times you're going to go over the order sequence, and whether it gets modified a lot.
If you're only going to do this once, simple go over the entire sequence and count. It'll be faster than building any other data structure, as you'll have to go over all the orders anyway.
Also, traversing a list of one million items is going to be pretty fast even if you do it repeatedly. Are you sure you have a performance problem here?
There are a lot of data structures which can give you a better search time, like trees or hash sets, etc.
But if you always want the orders amounting to more than 100k, you can very easily keep two lists (one for orders up to 100k and the other for orders above 100k), as sketched below. This is, however, specific to the case where the search amount is always constant, and not a general solution.
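A minimal sketch of that two-list idea, assuming the 100k threshold is fixed up front (orderAmounts is an illustrative name for the input sequence):
List<Integer> atMost100k = new ArrayList<Integer>();
List<Integer> above100k = new ArrayList<Integer>();
for (int amount : orderAmounts) {
    if (amount > 100000) {
        above100k.add(amount);
    } else {
        atMost100k.add(amount);
    }
}
// "all orders above 100k" is now just above100k, with no searching at all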
I have a really big vector that stores 100000 different values, ranging from 0 to 50000.
They represent the cylinders on a hard disk, and I want to sort this vector according to three different algorithms used for disk scheduling.
So far, I read those 100000 values from a file, store them in a vector, and then sort them according to the desired algorithm (FCFS, SCAN, SSTF). The problem is, it takes too long, because I'm doing it in the least creative way possible:
public static Vector<Integer> sortSSTF(Vector<Integer> array) {
    Vector<Integer> positions = new Vector<Integer>(array);
    Vector<Integer> return_array = new Vector<Integer>();
    int current_pos = 0, minimum, final_pos;
    while (positions.size() > 0) {
        minimum = 999999;
        final_pos = current_pos;
        for (int i = 0; i < positions.size(); i++) {
            // do some math
        }
        return_array.add(final_pos);
        current_pos = final_pos;
        positions.removeElement(final_pos);
    }
    return return_array;
}
My function takes a vector as a parameter, makes a copy of it, does some math to find the desired element in the copy, and stores it in the other array, which should end up ordered according to the selected algorithm. But for an array with N elements it takes on the order of N^2 iterations to complete, which is way too much, since the code should do this at least 10 times.
My question is: how can I make this sorting more efficient?
Java already has built-in methods to sort a List very quickly; see Collections.sort.
Vector is old and incurs a performance penalty due to its synchronization overhead. Use a List implementation (for example, ArrayList) instead.
That said, based on the content of your question, it sounds like you're instead having difficulty implementing the Shortest Seek Time First algorithm.
See related question Shortest seek time first algorithm using Comparator.
I don't think you can implement the SSTF or SCAN algorithm if you don't also supply the current position of the head as an argument to your sorting method. Assuming the initial value of current_pos is always 0 will just give you a list sorted in ascending order, in which case your method would look like this:
public static List<Integer> sortSSTF(List<Integer> cylinders) {
List<Integer> result = new ArrayList<Integer>(cylinders);
Collections.sort(result);
return result;
}
But that won't necessarily be a correct Shortest Seek Time First ordering if it's ever possible for current_pos > 0 when you first enter the method. Your algorithm will then probably look something like this:
1. Collections.sort(positions);
2. Find the indices in positions that contain the nextLowest and nextHighest positions relative to current_pos (or currentPos, if following Java naming conventions).
3. Whichever position is closer, remove that position from positions and add it to return_array. (If it was nextLowest, also decrement nextLowestIndex. If it was nextHighest, increment nextHighestIndex.)
4. Repeat step 3 until positions is empty.
5. Return return_array.
Of course, you'll also need to check for nextLowestIndex < 0 and nextHighestIndex >= positions.size() in step 3.
Note that you don't need the for loop inside of your while loop--but you would use that loop in step 2, before you enter the while loop.
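A minimal sketch of those steps, assuming the head position is passed in; rather than removing elements from positions, it just advances the two indices (names are illustrative):
public static List<Integer> sortSSTF(List<Integer> cylinders, int currentPos) {
    List<Integer> sorted = new ArrayList<Integer>(cylinders);
    Collections.sort(sorted); // step 1
    List<Integer> result = new ArrayList<Integer>(sorted.size());
    // step 2: bracket currentPos between nextLowestIndex and nextHighestIndex
    int nextHighestIndex = 0;
    while (nextHighestIndex < sorted.size()
            && sorted.get(nextHighestIndex) < currentPos) {
        nextHighestIndex++;
    }
    int nextLowestIndex = nextHighestIndex - 1;
    // steps 3 and 4: repeatedly take the closer neighbor
    while (nextLowestIndex >= 0 || nextHighestIndex < sorted.size()) {
        boolean takeLower;
        if (nextLowestIndex < 0) {
            takeLower = false;
        } else if (nextHighestIndex >= sorted.size()) {
            takeLower = true;
        } else {
            takeLower = currentPos - sorted.get(nextLowestIndex)
                    <= sorted.get(nextHighestIndex) - currentPos; // ties go low
        }
        currentPos = takeLower ? sorted.get(nextLowestIndex--)
                               : sorted.get(nextHighestIndex++);
        result.add(currentPos);
    }
    return result; // step 5
}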
I have a general programming question, that I have happened to use Java to answer. This is the question:
Given an array of ints write a program to find out how many numbers that are not unique are in the array. (e.g. in {2,3,2,5,6,1,3} 2 numbers (2 and 3) are not unique). How many operations does your program perform (in O notation)?
This is my solution.
int counter = 0;
for (int i = 0; i < theArray.length - 1; i++) {
    for (int j = i + 1; j < theArray.length; j++) {
        if (theArray[i] == theArray[j]) {
            counter++;
            break; // go to the next i; we know theArray[i] isn't unique, no need to keep comparing
        }
    }
}
return counter;
Now, In my code every element is being compared with every other element so there are about n(n-1)/2 operations. Giving O(n^2). Please tell me if you think my code is incorrect/inefficient or my O expression is wrong.
Why not use a Map as in the following example:
// NOTE: the int elements of theArray are autoboxed to Integer here,
// since Maps can't take primitives as keys or values.
Map<Integer, Integer> elementCount = new HashMap<Integer, Integer>();
for (int i = 0; i < theArray.length; i++) {
    if (elementCount.containsKey(theArray[i])) {
        elementCount.put(theArray[i], elementCount.get(theArray[i]) + 1);
    } else {
        elementCount.put(theArray[i], 1);
    }
}

List<Integer> moreThanOne = new ArrayList<Integer>();
for (Integer key : elementCount.keySet()) {
    if (elementCount.get(key) > 1) {
        moreThanOne.add(key);
    }
}
// do whatever you want with the moreThanOne list
Notice that this method requires iterating through the list twice (I'm sure there's a way to do it in one pass). It iterates once through theArray, and then implicitly again over the key set of elementCount, which, if no two elements are equal, will be exactly as large. However, iterating through the list twice serially is still O(n) instead of O(n^2), and thus has a much better asymptotic running time.
Your code doesn't do what you want. If you run it using the array {2, 2, 2, 2}, you'll find that it returns 3 instead of 1, because every element except the last still finds a later duplicate. You'll have to find a way to make sure the counting is never repeated.
However, your Big O expression is correct as a worst-case analysis, since every element might be compared with every other element.
Your analysis is correct, but you could easily get it down to O(n) time. Try using a HashMap<Integer,Integer> to store previously-seen values as you iterate through the array (the key is the number you've seen, the value is the number of times you've seen it). Each time you try to add an integer to the map, check whether it's already there. If it is, just increment that integer's counter. Then, at the end, loop through the map and count the keys whose value is higher than 1.
First, your approach is what I would call "brute force", and it is indeed O(n^2) in the worst case. It's also incorrectly implemented, since numbers that repeat n times are counted n-1 times.
Setting that aside, there are a number of ways to approach the problem. The first (which a number of answers have suggested) is to iterate the array, using a map to keep track of how many times each element has been seen. Assuming the map uses a hash table for the underlying storage, the average-case complexity should be O(n), since gets and inserts into the map are O(1) on average, and you only need to iterate the list and the map once each. Note that this is still O(n^2) in the worst case, since there's no guarantee that the hashing will produce constant-time results.
Another approach is to sort the array first, and then iterate the sorted array looking for duplicates. This approach is entirely dependent on the sort method chosen, and can be anywhere from O(n^2) (for a naive bubble sort) to O(n log n) worst case (for a merge sort) to O(n log n) average and typical case (for a quicksort).
That's the best you can do with the sorting approach, assuming arbitrary objects in the array. Since your example involves integers, though, you can do much better by using radix sort, which has a worst-case complexity of O(dn), where d is essentially constant (it maxes out at 10 decimal digits for 32-bit integers).
Finally, if you know that the elements are integers, and that their magnitude isn't too large, you can improve the map-based solution by using an array of size ElementMax, which would guarantee O(n) worst-case complexity, with the trade-off of requiring 4*ElementMax additional bytes of memory.
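A minimal sketch of that direct-address counting idea, where elementMax is the assumed known upper bound on the values:
int[] counts = new int[elementMax + 1]; // Java zero-initializes the array
for (int value : theArray) {
    counts[value]++;
}
int notUnique = 0;
for (int c : counts) {
    if (c > 1) {
        notUnique++; // each value seen more than once is one non-unique number
    }
}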
I think your time complexity of O(n^2) is correct.
If space complexity is not an issue, then you can use an array of 256 counters (the standard ASCII range) and fill it with counts. For example:
// Java zero-initializes int arrays, so no explicit clearing is needed.
// The following runs in O(n + m), where n is the length of theArray and m is the length of array.
int[] array = new int[256];
for (int i = 0; i < theArray.length; i++)
    array[theArray[i]] = array[theArray[i]] + 1;
for (int i = 0; i < array.length; i++)
    if (array[i] > 1)
        System.out.print(i);
As others have said, an O(n) solution is quite possible using a hash. In Perl:
my @data = (2,3,2,5,6,1,3);
my %count;
$count{$_}++ for @data;
my $n = grep $_ > 1, values %count;
print "$n numbers are not unique\n";
OUTPUT
2 numbers are not unique
I'm seeking to display a fixed number of items on a web page according to their respective weight (represented by an Integer). The List where these items are found can be of virtually any size.
The first solution that comes to mind is to do a Collections.sort() and to get the items one by one by going through the List. Is there a more elegant solution though that could be used to prepare, say, the top eight items?
Just go for Collections.sort(..). It is efficient enough.
This algorithm offers guaranteed n log(n) performance.
You can try to implement something more efficient for your concrete case if you know some distinctive properties of your list, but that would not be justified. Furthermore, if your list comes from a database, for example, you can LIMIT it & order it there instead of in code.
Your options:
Do a linear search, maintaining the top N weights found along the way. This should be quicker than sorting a lengthy list if, for some reason, you can't reuse the sorting results between page displays (e.g. the list changes quickly).
UPDATE: I stand corrected on the linear search necessarily being better than sorting. See Wikipedia article "Selection_algorithm - Selecting k smallest or largest elements" for better selection algorithms.
Manually maintain a List (the original one or a parallel one) sorted in weight order. You can use methods like Collections.binarySearch() to determine where to insert each new item.
Maintain a List (the original one or a parallel one) sorted in weight order by calling Collections.sort() after each modification, batch modifications, or just before display (possibly maintaining a modification flag to avoid sorting an already sorted list).
Use a data structure that maintains sorted weight-order for you: priority queue, tree set, etc. You could also create your own data structure.
Manually maintain a second (possibly weight-ordered) data structure of the top N items. This data structure is updated anytime the original data structure is modified. You could create your own data structure to wrap the original list and this "top N cache" together.
You could use a max-heap.
If your data originates from a database, put an index on that column and use ORDER BY and TOP or LIMIT to fetch only the records you need to display.
Or a priority queue.
using dollar:
List<Integer> topTen = $(list).sort().slice(10).toList();
Without using dollar, you should sort() it using Collections.sort(), then get the first n items using list.subList(0, n).
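A plain-JDK version of the same idea; note the reverse sort, assuming "top" means the largest items:
Collections.sort(list, Collections.reverseOrder());
List<Integer> topTen = new ArrayList<Integer>(list.subList(0, Math.min(10, list.size())));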
Since you say the list of items from which to extract these top N may be of any size, and so may be large, I'd augment the simple sort() answers above (which are entirely appropriate for reasonably-sized input) by pointing out that most of the work here is finding the top N; sorting those N afterwards is trivial. That is:
Queue<Integer> topN = new PriorityQueue<Integer>(n); // min-heap of the N largest so far
for (Integer item : input) {
    if (topN.size() < n) {
        topN.add(item);
    } else if (item > topN.peek()) { // beats the smallest of the current top N
        topN.add(item);
        topN.poll(); // evict the smallest
    }
}
List<Integer> result = new ArrayList<Integer>(n);
result.addAll(topN);
Collections.sort(result, Collections.reverseOrder()); // largest first
The heap here (a min-heap) is at least bounded in size. There's no real need to make a heap out of all your items.
No, not really. At least not using Java's built-in methods.
There are clever ways to get the highest (or lowest) N number of items from a list quicker than an O(n*log(n)) operation, but that will require you to code this solution by hand. If the number of items stays relatively low (not more than a couple of hundred), sorting it using Collections.sort() and then grabbing the top N numbers is the way to go IMO.
Depends on how many. Lets define n as the total number of keys, and m as the number you wish to display.
Sorting the entire thing: O(nlogn)
Scanning the array each time for the next highest number: O(n*m)
So the question is: what's the relation between n and m?
If m < log n, scanning will be more efficient.
Otherwise, m >= log n, which means sorting will be better. (For the edge case of m = log n it doesn't actually matter, but sorting also gives you the benefit of, well, a sorted array, which is always nice.)
If the size of the list is N, and the number of items to be retrieved is k, you need to call heapify on the list, which converts the list (which has to be indexable, e.g. an array) into a priority queue (see the heapify function in http://en.wikipedia.org/wiki/Heapsort).
Retrieving the item at the top of the heap (the max item) takes O(log N) time, so your overall time would be:
O(N + k log N)
which is better than O(N log N), assuming k is much smaller than N.
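In Java, java.util.PriorityQueue's Collection constructor performs exactly that linear-time heapify. A sketch for extracting the k smallest items follows; for the k largest you'd insert negated values or wrap elements with a reversed comparison, since this constructor uses natural ordering:
PriorityQueue<Integer> heap = new PriorityQueue<Integer>(list); // O(N) heapify, min-heap
List<Integer> smallestK = new ArrayList<Integer>(k);
for (int i = 0; i < k && !heap.isEmpty(); i++) {
    smallestK.add(heap.poll()); // O(log N) per extraction
}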
If keeping a sorted array or using a different data structure is not an option, you could try something like the following. The asymptotic time is similar to sorting the large array, but in practice this should be more efficient.
// keep a small sorted buffer of the k best items seen so far
// (assumes bigArray has at least numberOfItemsToFind elements)
List<Integer> smallArray = new ArrayList<Integer>(bigArray.subList(0, numberOfItemsToFind));
Collections.sort(smallArray);
int leastFoundValue = smallArray.get(0);
for (int i = numberOfItemsToFind; i < bigArray.size(); i++) { // skips the first few items
    int item = bigArray.get(i);
    if (item > leastFoundValue) {
        smallArray.remove(0); // drop the current smallest of the buffer
        int pos = Collections.binarySearch(smallArray, item);
        smallArray.add(pos >= 0 ? pos : -pos - 1, item); // insert sorted
        leastFoundValue = smallArray.get(0);
    }
}
The buffer could also be a plain array, with the inner insertion done by shifting elements in place instead of removing from and inserting into a list.