I am computing the intersection of two linked lists, where one list is of size n and the other is of size m. The code below stores the items of the smaller list in a set, so the space complexity is O(m), where m ≤ n, i.e., m is the length of the smaller list.
But it is possible that the two linked lists are equal in length, m = n. Is the complexity then O(n)?
public IntersectionAndUnionLinkedList<T> intersection(IntersectionAndUnionLinkedList<T> list) {
    final Set<T> items = new HashSet<>();
    Node<T> firstSmall = null;
    Node<T> firstBig = null;
    if (size <= list.size) {
        firstSmall = first;
        firstBig = list.first;
    } else {
        firstSmall = list.first;
        firstBig = first;
    }
    Node<T> n = null;
    for (n = firstSmall; n != null; n = n.next) {
        items.add(n.item);
    }
    IntersectionAndUnionLinkedList<T> intersectionlist = new IntersectionAndUnionLinkedList<>();
    for (n = firstBig; n != null; n = n.next) {
        if (items.contains(n.item)) {
            intersectionlist.add(n.item);
        }
    }
    return intersectionlist;
}
Well, in the case m = n, O(m) = O(n), but it is safe to state that the memory complexity is O(m), since that is the only real factor.
On the other hand, a HashSet<T> can under extreme circumstances be less memory efficient: after all it uses buckets, and the buckets can be filled in an unfortunate way. It depends on the exact implementation of the backing HashMap. One would still expect linear memory use, though, so O(m).
I believe the space complexity is O(n) here because of the recursion, but since I'm also storing everything in lists, maybe this requires more space?
The algorithm just produces the right side view of a binary tree: if root = 5, root.left = 3, root.right = 4, then the right side view is [5, 4]. If there were no root.right, the right side view would be [5, 3].
What is the space complexity of this code?
Code:
import java.util.*;

public class TestingJavaCode {
    public List<Integer> rightSideView(TreeNode root) {
        ArrayList<Integer> res = new ArrayList<>();
        List<Integer> leftS = new ArrayList<>();
        if (root == null) return res;
        res.add(root.val);
        leftS.add(root.val);
        if (root.right != null) {
            for (int i : rightSideView(root.right)) {
                res.add(i);
            }
        }
        if (root.left != null) {
            for (int j : rightSideView(root.left)) {
                leftS.add(j);
            }
        }
        for (int k = 0; k < leftS.size(); k++) {
            if (k > res.size() - 1) res.add(leftS.get(k));
        }
        return res;
    }

    public static void main(String[] args) {
        TestingJavaCode c = new TestingJavaCode();
        TreeNode s4 = new TreeNode(1);
        s4.left = new TreeNode(2);
        s4.right = new TreeNode(3);
        s4.left.left = new TreeNode(4);
        s4.left.right = new TreeNode(5);
        // [1, 3, 5]
        System.out.println(c.rightSideView(s4).toString());
    }
}

// Definition for a binary tree node.
class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;

    TreeNode(int x) {
        val = x;
    }
}
The complexity is O(n). Although storing data in lists instead of arrays adds a slight overhead, it is only a small constant factor and does not affect the asymptotic space usage.
I suppose you take n to be the number of nodes in the tree. In that case the recursion does not take O(n) space but rather space proportional to the tree height. If you assume the tree is somewhat balanced, the space complexity of the recursion itself is O(log n) (but it is O(n) for an arbitrary tree, which degenerates to a list in the worst case).
But since you are storing every node in a list at the end, the space complexity degenerates to O(n) in all cases.
So yes, the space complexity is O(n), but your reasoning was not quite correct.
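For reference, a common alternative formulation (not the code above) makes the O(h) recursion cost explicit: traverse the right subtree first and record the first value seen at each depth. A minimal sketch, assuming the same TreeNode class:

import java.util.ArrayList;
import java.util.List;

class RightView {
    // The first node reached at each depth is the rightmost one, because the
    // right subtree is visited first. Auxiliary space is the recursion stack, O(h).
    public List<Integer> rightSideView(TreeNode root) {
        List<Integer> res = new ArrayList<>();
        dfs(root, 0, res);
        return res;
    }

    private void dfs(TreeNode node, int depth, List<Integer> res) {
        if (node == null) return;
        if (depth == res.size()) res.add(node.val); // first visit at this depth
        dfs(node.right, depth + 1, res);            // right subtree first
        dfs(node.left, depth + 1, res);
    }
}

The output list is unavoidable, but the extra bookkeeping shrinks from repeated list copies to just the recursion stack.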
I'm researching how to find the k values in a BST that are closest to a target, and came across the following implementation, with these rules:
Given a non-empty binary search tree and a target value, find k values in the BST that are closest to the target.
Note:
Given target value is a floating point.
You may assume k is always valid, that is: k ≤ total nodes.
You are guaranteed to have only one unique set of k values in the BST that are closest to the target. Assume that the BST is balanced.
And the idea of the implementation is:
Compare the predecessors and successors of the closest node to the target. We can use two stacks to track the predecessors and successors; then, as in the merge step of merge sort, we compare the tops, pick the one closest to the target, and put it in the result list. As we know, in-order traversal gives us sorted predecessors, whereas reverse in-order traversal gives us sorted successors.
Code:
import java.util.*;

class TreeNode {
    int val;
    TreeNode left, right;

    TreeNode(int x) {
        val = x;
    }
}

public class ClosestBSTValueII {
    List<Integer> closestKValues(TreeNode root, double target, int k) {
        List<Integer> res = new ArrayList<>();
        Stack<Integer> s1 = new Stack<>(); // predecessors
        Stack<Integer> s2 = new Stack<>(); // successors
        inorder(root, target, false, s1);
        inorder(root, target, true, s2);
        while (k-- > 0) {
            if (s1.isEmpty()) {
                res.add(s2.pop());
            } else if (s2.isEmpty()) {
                res.add(s1.pop());
            } else if (Math.abs(s1.peek() - target) < Math.abs(s2.peek() - target)) {
                res.add(s1.pop());
            } else {
                res.add(s2.pop());
            }
        }
        return res;
    }

    // inorder traversal
    void inorder(TreeNode root, double target, boolean reverse, Stack<Integer> stack) {
        if (root == null) {
            return;
        }
        inorder(reverse ? root.right : root.left, target, reverse, stack);
        // early termination: no need to traverse the whole tree
        if ((reverse && root.val <= target) || (!reverse && root.val > target)) {
            return;
        }
        // track the value of the current node
        stack.push(root.val);
        inorder(reverse ? root.left : root.right, target, reverse, stack);
    }

    public static void main(String[] args) {
        ClosestBSTValueII cv = new ClosestBSTValueII();
        TreeNode root = new TreeNode(53);
        root.left = new TreeNode(30);
        root.left.left = new TreeNode(20);
        root.left.right = new TreeNode(42);
        root.right = new TreeNode(90);
        root.right.right = new TreeNode(100);
        System.out.println(cv.closestKValues(root, 40, 2));
    }
}
My questions: what is the reason for having two stacks, and why is in-order traversal a good approach? What is the purpose of each stack? Wouldn't traversing with one stack be enough?
Also, what is the point of the reverse boolean, as in inorder(reverse ? ...)? And in the case of if ((reverse && root.val <= target) || (!reverse && root.val > target)), why do you terminate early?
Thank you in advance; I will accept an answer/upvote.
The idea of the algorithm you found is quite simple: it is just an in-order traversal of the tree from the place where the target would be inserted. Two stacks store the predecessors and successors. Let's take this tree as an example:
    5
   / \
  3   9
 / \   \
2   4   11
Let the target be 8. When all the inorder calls are finished, the stacks will be s1 = {2, 3, 4, 5} and s2 = {11, 9}. As you can see, s1 contains all predecessors of the target and s2 all its successors. Moreover, both stacks are ordered so that the top of each stack is closer to the target than every other value in that stack. As a result, we can easily find the k closest values just by repeatedly comparing the tops of the two stacks and popping the closer value until we have k values. The running time of this algorithm is O(n).
Now about your questions. I don't know how to implement this algorithm effectively with only one stack; the problem with a stack is that we only have access to its top. But it is extremely easy to implement the algorithm with one array. Let's just do a usual in-order traversal of the tree. For my example we get arr = {2, 3, 4, 5, 9, 11}. Then place indexes l and r at the values closest to the target from each side: l = 3, r = 4 (arr[l] = 5, arr[r] = 9). What is left is to repeatedly compare arr[l] and arr[r] and choose which to add to the result (exactly the same as with two stacks; see the sketch below). This also takes O(n) operations.
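A minimal sketch of that array variant (assuming the TreeNode class from the question; the helper names are mine):

import java.util.ArrayList;
import java.util.List;

class ClosestKWithArray {
    List<Integer> closestKValues(TreeNode root, double target, int k) {
        List<Integer> arr = new ArrayList<>();
        inorder(root, arr); // arr is now sorted
        int r = 0;
        while (r < arr.size() && arr.get(r) < target) r++; // first value >= target
        int l = r - 1;
        List<Integer> res = new ArrayList<>();
        while (k-- > 0) { // k is assumed valid, as in the problem statement
            if (l < 0) res.add(arr.get(r++));
            else if (r >= arr.size()) res.add(arr.get(l--));
            else if (Math.abs(arr.get(l) - target) < Math.abs(arr.get(r) - target))
                res.add(arr.get(l--));
            else res.add(arr.get(r++));
        }
        return res;
    }

    private void inorder(TreeNode node, List<Integer> arr) {
        if (node == null) return;
        inorder(node.left, arr);
        arr.add(node.val);
        inorder(node.right, arr);
    }
}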
Their approach seems to me a bit hard to follow in code, though it is rather elegant.
I'd like to introduce another approach to the problem with a different running time. This algorithm takes O(k log n) time, which is better than the previous algorithm for small k and worse for large k.
Let's also store in the TreeNode class a pointer to the parent node. Then we can find the predecessor or successor of any node in the tree in O(log n) time when the tree is balanced. So first find the predecessor and successor of the target in the tree (without doing any full traversals!). Then do the same as with the stacks: compare the predecessor and successor, choose the closer one, and for the chosen one move on to its own predecessor or successor; a sketch of the successor half follows.
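Here is that sketch (the predecessor is symmetric), assuming TreeNode gains a parent field:

// O(h) successor using parent pointers: either the leftmost node of the
// right subtree, or the first ancestor reached from a left child.
static TreeNode successor(TreeNode node) {
    if (node.right != null) {
        TreeNode p = node.right;
        while (p.left != null) p = p.left;
        return p;
    }
    TreeNode p = node.parent;
    while (p != null && node == p.right) {
        node = p;
        p = p.parent;
    }
    return p;
}

On a balanced tree h = O(log n), which gives the O(k log n) total.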
I hope I answered your questions and that the explanations are clear. If not, feel free to ask!
The reason you need two stacks is that you must traverse the tree in two directions, comparing the current value of each stack with the value you're searching for (you may end up with k values greater than the searched value, or k/2 greater and k/2 lower).
I think you should use stacks of TreeNode rather than stacks of Integer; that way you could avoid recursion.
UPDATE:
I see two phases in the algorithm:
1) Locate the closest value in the tree; this simultaneously builds the initial stack.
2) Make a copy of the stack and move it back one element; this gives you the second stack. Then iterate at most k times: see which of the two elements on top of the stacks is closest to the searched value, add it to the result list, and move that stack forward or backward.
UPDATE 2: A little code
public static List<Integer> closest(TreeNode root, int val, int k) {
    Stack<TreeNode> right = locate(root, val);
    Stack<TreeNode> left = new Stack<>();
    left.addAll(right);
    moveLeft(left);
    List<Integer> result = new ArrayList<>();
    for (int i = 0; i < k; ++i) {
        if (left.isEmpty()) {
            if (right.isEmpty()) {
                break;
            }
            result.add(right.peek().val);
            moveRight(right);
        } else if (right.isEmpty()) {
            result.add(left.peek().val);
            moveLeft(left);
        } else {
            int lval = left.peek().val;
            int rval = right.peek().val;
            if (Math.abs(val - lval) < Math.abs(val - rval)) {
                result.add(lval);
                moveLeft(left);
            } else {
                result.add(rval);
                moveRight(right);
            }
        }
    }
    return result;
}

private static Stack<TreeNode> locate(TreeNode p, int val) {
    Stack<TreeNode> stack = new Stack<>();
    while (p != null) {
        stack.push(p);
        if (val < p.val) {
            p = p.left;
        } else {
            p = p.right;
        }
    }
    return stack;
}

private static void moveLeft(Stack<TreeNode> stack) {
    if (!stack.isEmpty()) {
        TreeNode p = stack.peek().left;
        if (p != null) {
            do {
                stack.push(p);
                p = p.right;
            } while (p != null);
        } else {
            do {
                p = stack.pop();
            } while (!stack.isEmpty() && stack.peek().left == p);
        }
    }
}

private static void moveRight(Stack<TreeNode> stack) {
    if (!stack.isEmpty()) {
        TreeNode p = stack.peek().right;
        if (p != null) {
            do {
                stack.push(p);
                p = p.left;
            } while (p != null);
        } else {
            do {
                p = stack.pop();
            } while (!stack.isEmpty() && stack.peek().right == p);
        }
    }
}
UPDATE 3
Wouldn't traversing it with one stack be enough? And what's the point of having a reverse boolean, such as for inorder(reverse ? ...);? And in the case of if ((reverse && root.val <= target) || (!reverse && root.val > target)), why do you terminate early?
I don't know where you got the solution in your question from, but to summarize: it builds two lists of Integer, one in ascending order and one in descending order, and it terminates "early" when the searched value is reached. That solution sounds quite inefficient, since it requires traversing the whole tree. Mine, of course, is much better, and it conforms to the given rules.
I'm doing a project in which I need to calculate the complexity of a recursive method. The method below calls itself recursively and uses the methods incomingEdges and opposite. Can someone help me find the complexity of the FUNCTION method?
public HashMap<String, Integer[]> FUNCTION() {
    HashMap<String, Integer[]> times = new HashMap<>();
    Integer[] timesAct = new Integer[5];
    boolean[] visited = new boolean[graphPertCpm.numVertices()];
    Vertex<Activity, String> current = graphPertCpm.getVertex(0);
    timesAct[0] = 0;
    timesAct[1] = 0;
    times.put(current.getElement().getKeyId(), timesAct);
    FUNCTION(current, times, visited);
    return times;
}

private void FUNCTION(Vertex<Activity, String> current, HashMap<String, Integer[]> times, boolean[] visited) {
    if (times.get(current.getElement().getKeyId()) == null) {
        for (Edge<Activity, String> inc : graphPertCpm.incomingEdges(current)) {
            Vertex<Activity, String> vAdj = graphPertCpm.opposite(current, inc);
            FUNCTION(vAdj, times, visited);
        }
    }
    visited[current.getKey()] = true;
    for (Entry<Vertex<Activity, String>, Edge<Activity, String>> outs : current.getOutgoing().entrySet()) {
        if (!visited[outs.getKey().getKey()]) {
            int maxEF = 0;
            Vertex<Activity, String> vAdj = graphPertCpm.opposite(current, outs.getValue());
            for (Edge<Activity, String> inc : graphPertCpm.incomingEdges(outs.getKey())) {
                Integer[] timesAct = times.get(graphPertCpm.opposite(outs.getKey(), inc).getElement().getKeyId());
                if (timesAct == null) {
                    vAdj = graphPertCpm.opposite(vAdj, inc);
                    FUNCTION(vAdj, times, visited);
                } else {
                    if (timesAct[1] > maxEF) {
                        maxEF = timesAct[1];
                    }
                }
            }
            Integer[] timesAct = new Integer[5];
            timesAct[0] = maxEF;
            timesAct[1] = timesAct[0] + outs.getKey().getElement().getDuration();
            times.put(outs.getKey().getElement().getKeyId(), timesAct);
            if (visited[vAdj.getKey()] != true) {
                FUNCTION(vAdj, times, visited);
            }
        }
    }
    visited[current.getKey()] = false;
}
Opposite Method
public Vertex<V, E> opposite(Vertex<V, E> vert, Edge<V, E> e) {
    if (e.getVDest() == vert) {
        return e.getVOrig();
    } else if (e.getVOrig() == vert) {
        return e.getVDest();
    }
    return null;
}
IncomingEdges Method
public Iterable<Edge<V, E>> incomingEdges(Vertex<V, E> v) {
    Edge e;
    ArrayList<Edge<V, E>> edges = new ArrayList<>();
    for (int i = 0; i < numVert; i++) {
        for (int j = 0; j < numVert; j++) {
            e = getEdge(getVertex(i), getVertex(j));
            if (e != null && e.getVDest() == v) {
                edges.add(e);
            }
        }
    }
    return edges;
}
So first, are you familiar with the concepts of Big-O analysis?
The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:
Constant O(1)
statement;
The running time of the statement will not change in relation to N.
Linear O(n)
for (i = 0; i < N; i++)
    statement;
The running time of the loop is directly proportional to N. When N doubles, so does the running time.
Quadratic O(n²)
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++)
        statement;
}
The running time of the two loops is proportional to the square of N. When N doubles, the running time increases fourfold.
Logarithmic O(log n)
while (low <= high) {
    mid = (low + high) / 2;
    if (target < list[mid])
        high = mid - 1;
    else if (target > list[mid])
        low = mid + 1;
    else
        break;
}
The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
Linearithmic O(n log n)
void quicksort(int list[], int left, int right) {
    int pivot = partition(list, left, right);
    quicksort(list, left, pivot - 1);
    quicksort(list, pivot + 1, right);
}
The running time is N * log(N): it consists of N loops (iterative or recursive) that are each logarithmic, so the algorithm is a combination of linear and logarithmic (hence "linearithmic").
Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course.
Your function can be stripped down to two for-loops with recursive calls and one additional for-loop:
for (Edge<Activity, String> inc : graphPertCpm.incomingEdges(current)) {
    Vertex<Activity, String> vAdj = graphPertCpm.opposite(current, inc);
    FUNCTION(vAdj, times, visited);
}
for (Entry<Vertex<Activity, String>, Edge<Activity, String>> outs : current.getOutgoing().entrySet()) {
    for (Edge<Activity, String> inc : graphPertCpm.incomingEdges(outs.getKey())) {
        FUNCTION(vAdj, times, visited);
    }
}
Then, as suggested, consult the Master Theorem to solve the recurrence. Note also that incomingEdges itself contains two nested loops over all vertices, so every call to it already costs O(V²), and each recursive call pays that cost again.
If you need the complexity of the basic graph operations, the Big-O Cheat Sheet is a good reference.
Alright, here's the lowdown: I'm writing a class in Java that finds the nth Hardy's taxi number (a number that can be written as the sum of two cubes in two different ways). I have the discovery itself down, but I am in desperate need of some space saving. To that end, I need the smallest possible data structure where I can relatively easily use or create a method like contains(). I'm not particularly worried about speed, as my current solution can certainly compute it well within the time restrictions.
In short, the data structure needs:
To be able to relatively simply implement a contains() method
To use a low amount of memory
To be able to store very large number of entries
To be easily usable with the primitive long type
Any ideas? I started with a hash map (because I needed to test the values that led to the sum, to ensure accuracy), then moved to a hash set once I could guarantee reliable answers.
Any other general ideas on how to save some space would be greatly appreciated!
I don't think you'd need the code to answer the question, but here it is in case you're curious:
import java.util.HashSet;

public class Hardy {
    // private static HashMap<Long, Long> hm;

    /**
     * Find the nth Hardy number (start counting with 1, not 0) and the numbers
     * whose cubes demonstrate that it is a Hardy number.
     * @param n
     * @return the nth Hardy number
     */
    public static long nthHardyNumber(int n) {
        // long i, j, oldValue;
        int i, j;
        int counter = 0;
        long xyLimit = 2147483647; // xyLimit is the max value of a 32-bit signed number
        long sum;
        // hm = new HashMap<Long, Long>();
        int hardyCalculations = (int) (n * 1.1);
        HashSet<Long> hs = new HashSet<Long>(hardyCalculations * hardyCalculations, (float) 0.95);
        long[] sums = new long[hardyCalculations];
        // long binaryStorage, mask = 0x00000000FFFFFFFF;
        for (i = 1; i < xyLimit; i++) {
            for (j = 1; j <= i; j++) {
                // binaryStorage = ((i << 32) + j);
                // long y = ((binaryStorage << 32) >> 32) & mask;
                // long x = (binaryStorage >> 32) & mask;
                sum = cube(i) + cube(j);
                if (hs.contains(sum) && !arrayContains(sums, sum)) {
                    // oldValue = hm.get(sum);
                    // long oldY = ((oldValue << 32) >> 32) & mask;
                    // long oldX = (oldValue >> 32) & mask;
                    // if (oldX != x && oldX != y) {
                    sums[counter] = sum;
                    counter++;
                    if (counter == hardyCalculations) {
                        // Arrays.sort(sums);
                        bubbleSort(sums);
                        return sums[n - 1];
                    }
                } else {
                    hs.add(sum);
                }
            }
        }
        return 0;
    }

    private static void bubbleSort(long[] array) {
        long current, next;
        int i;
        boolean ordered = false;
        while (!ordered) {
            ordered = true;
            for (i = 0; i < array.length - 1; i++) {
                current = array[i];
                next = array[i + 1];
                if (current > next) {
                    ordered = false;
                    array[i] = next;
                    array[i + 1] = current;
                }
            }
        }
    }

    private static boolean arrayContains(long[] array, long n) {
        for (long l : array) {
            if (l == n) {
                return true;
            }
        }
        return false;
    }

    private static long cube(long n) {
        return n * n * n;
    }
}
Have you considered using a standard tree? In Java that would be a TreeSet. By sacrificing some speed, a tree generally gains back space over a hash.
For that matter, sums might be a TreeMap, turning the linear arrayContains into a logarithmic operation. Being naturally ordered, there would also be no need to re-sort it afterwards.
EDIT
The complaint against using a Java tree structure for sums is that Java's tree types don't support the k-select algorithm. On the assumption that Hardy numbers are rare, perhaps you don't need to sweat the complexity of this container (in which case your array is fine).
If you did need to improve time performance of this aspect, you could consider using a selection-enabled tree such as the one mentioned here. However that solution works by increasing the space requirement, not lowering it.
Alternately, we can incrementally throw out Hardy numbers we know we don't need. Suppose that during the run of the algorithm, sums already contains n Hardy numbers and we discover a new one. We insert it and do whatever is needed to preserve the collection order, so sums now contains n + 1 sorted elements.
Consider that last element. We already know about n smaller Hardy numbers, so there is no possible way this last element is our answer. Why keep it? At this point we can shrink sums back down to size n and toss the largest element out. This is both a space saving and a time saving, as we have fewer elements to maintain in sorted order.
The natural data structure for sums in that approach is a max heap. In Java you can get one from PriorityQueue with a reversed comparator, and you could also "make it work" with TreeMap::lastKey, which will be slower in the end, but still faster than the quadratic bubbleSort. A sketch of the heap version follows.
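A minimal sketch of that trimming idea, using java.util.PriorityQueue as a max heap (the class and method names are mine):

import java.util.Collections;
import java.util.PriorityQueue;

// Keeps only the n smallest values seen so far. The largest of them sits
// on top of the max heap, so trimming is O(log n) per insertion.
class SmallestN {
    private final PriorityQueue<Long> heap =
            new PriorityQueue<>(Collections.reverseOrder()); // max heap
    private final int n;

    SmallestN(int n) { this.n = n; }

    void offer(long value) {
        if (heap.size() < n) {
            heap.add(value);
        } else if (value < heap.peek()) {
            heap.poll();     // the current largest can no longer be the answer
            heap.add(value);
        }
    }

    /** Once n values are collected, the top is the nth smallest. */
    long nthSmallest() { return heap.peek(); }
}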
If you have an extremely large number of elements and you effectively want an index to allow fast tests for containment in the underlying dataset, take a look at Bloom filters. These are space-efficient indexes whose sole purpose is to enable fast tests for containment in a dataset.
Bloom filters are probabilistic: if one returns true for containment, you still need to check your underlying dataset to confirm that the element is really present.
If it returns false, the element is guaranteed not to be contained in the underlying dataset, and in that case the test for containment is very cheap.
So it depends on whether, most of the time, you expect a candidate to really be contained in the dataset or not.
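As a sketch, with Guava's BloomFilter (assuming that dependency is acceptable; the sizing numbers are made up):

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

class BloomSketch {
    public static void main(String[] args) {
        BloomFilter<Long> seen = BloomFilter.create(
                Funnels.longFunnel(), // how a long is fed to the hash functions
                100_000_000L,         // expected number of insertions
                0.01);                // target false-positive probability

        seen.put(1729L);
        if (seen.mightContain(1729L)) {
            // "maybe present": confirm against the real dataset
        }
        // a false from mightContain is definitive: the value was never added
    }
}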
This is the core function for testing whether a given number is an HR (Hardy-Ramanujan) number; it's in C, but one should get the idea:
#include <math.h>
#include <stdbool.h>

bool is_sum_of_cubes(int value)
{
    int m = pow(value, 1.0 / 3); /* truncated cube root; beware floating-point rounding */
    int i = m;
    int j = 1;
    while (j < m && i >= 0)
    {
        int element = i * i * i + j * j * j;
        if (value == element)
        {
            return true;
        }
        if (element < value)
        {
            ++j;
        }
        else
        {
            --i;
        }
    }
    return false;
}
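Since the question is in Java, here is roughly the same two-pointer scan as a Java sketch, extended to count representations; a value is a Hardy number when this returns at least 2 (the method name is mine, and this is untested against edge cases):

// Counts unordered pairs (i, j) with i >= j >= 1 and i^3 + j^3 == value.
static int cubeRepresentations(long value) {
    long i = Math.round(Math.cbrt(value)); // largest candidate cube root
    long j = 1;
    int count = 0;
    while (j <= i) {
        long sum = i * i * i + j * j * j;
        if (sum == value) { count++; i--; j++; }
        else if (sum < value) j++;
        else i--;
    }
    return count; // cubeRepresentations(1729) == 2, for example
}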
I'm programming in Java. Every 100 ms my program gets a new number.
It has a cache which contains the history of the last n = 180 numbers.
When I get a new number x, I want to calculate how many numbers in the cache are smaller than x.
Afterwards I want to delete the oldest number from the cache.
Every 100 ms I want to repeat this process of counting the smaller numbers and deleting the oldest.
Which algorithm should I use? I would like to optimize for making the calculation fast, as it's not the only thing being computed in those 100 ms.
For practical reasons and reasonable values of n, you're best off with a ring buffer of primitive ints (to keep track of the oldest entry) and a linear scan to determine how many values are smaller than x.
For this to run in O(log n), you would have to use something like Guava's TreeMultiset. Here is an outline of how it would look.
import java.util.*;

class Statistics {
    private final static int N = 180;

    Queue<Integer> queue = new LinkedList<Integer>();
    SortedMap<Integer, Integer> counts = new TreeMap<Integer, Integer>();

    public int insertAndGetSmallerCount(int x) {
        queue.add(x);                            // O(1)
        counts.put(x, getCount(x) + 1);          // O(log N)
        int lessCount = 0;                       // O(N), unfortunately
        for (int i : counts.headMap(x).values()) // use Guava's TreeMultiset
            lessCount += i;                      // for O(log N)
        if (queue.size() > N) {                  // O(1)
            int oldest = queue.remove();         // O(1)
            int newCount = getCount(oldest) - 1; // O(log N)
            if (newCount == 0)
                counts.remove(oldest);           // O(log N)
            else
                counts.put(oldest, newCount);    // O(log N)
        }
        return lessCount;
    }

    private int getCount(int x) {
        return counts.containsKey(x) ? counts.get(x) : 0;
    }
}
On my 1.8 GHz laptop, this solution performs 1,000,000 iterations in about 13 seconds (i.e. one iteration takes about 0.013 ms, well under your 100 ms).
You can keep an array of 180 numbers and save an index to the oldest one, so that when a new number comes in you overwrite the number at the oldest index and increment the index modulo 180 (it's a bit more complex than that, since you need special behaviour for the first 180 numbers).
As for calculating how many numbers are smaller, I would use the brute-force way (iterate over all the numbers and count).
Edit: I find it funny to see that the "optimized" version runs five times slower than this trivial implementation (thanks to @Eiko for the analysis). I think this is due to the fact that when you use trees and maps you lose data locality and incur many more cache misses (not to mention memory allocation and garbage collection).
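A minimal sketch of that ring buffer (the names are mine):

class NumberCache {
    private final int[] buf;
    private int oldest = 0; // index of the slot to overwrite next
    private int filled = 0; // slots in use; handles the special first 180 numbers

    NumberCache(int capacity) { buf = new int[capacity]; }

    /** Counts cached values smaller than x, then stores x over the oldest slot. */
    int addAndCountSmaller(int x) {
        int count = 0;
        for (int i = 0; i < filled; i++)
            if (buf[i] < x) count++;       // brute-force linear scan
        buf[oldest] = x;
        oldest = (oldest + 1) % buf.length;
        if (filled < buf.length) filled++;
        return count;
    }
}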
Add your numbers to a list. If its size exceeds 180, remove the first number.
Counting is just iterating over the 180 elements, which is probably fast enough. It's hard to beat performance-wise.
You can use a LinkedList implementation.
With this structure, you can easily manipulate the first and last elements of the list (addFirst, removeFirst, ...).
For the algorithm (finding how many numbers are lower/greater), a simple loop over the list is enough and will give you the result in well under 100 ms for a 180-element list; see the sketch below.
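For instance, a sketch (the class name is mine):

import java.util.LinkedList;

class Cache {
    private final LinkedList<Integer> list = new LinkedList<>();
    private final int n;

    Cache(int n) { this.n = n; }

    int addAndCountSmaller(int x) {
        int count = 0;
        for (int v : list)
            if (v < x) count++;    // simple O(n) scan
        list.addFirst(x);          // newest at the front
        if (list.size() > n)
            list.removeLast();     // evict the oldest
        return count;
    }
}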
You could try a custom linked-list data structure where each node maintains next/prev as well as sorted next/prev references. Then inserting becomes a two-phase process: first always insert the node at the tail, then insertion-sort it into the sorted chain; the insertion sort returns the count of numbers less than x. Deleting is simply removing the head.
Here is an example. NOTE: THIS IS VERY NASTY JAVA; IT IS EXAMPLE CODE PURELY TO DEMONSTRATE THE IDEA. You get the idea! ;) Also, I'm only adding a few items, but it should give you an idea of how it would work... The worst case is a full iteration through the sorted linked list, which is no worse than the brute-force answers above, I guess?
import java.util.*;

class SortedLinkedList {

    public static class SortedLL<T> {

        public class SortedNode<T> {
            public SortedNode(T value) {
                _value = value;
            }

            T _value;
            SortedNode<T> prev;
            SortedNode<T> next;
            SortedNode<T> sortedPrev;
            SortedNode<T> sortedNext;
        }

        public SortedLL(Comparator comp) {
            _comp = comp;
            _head = new SortedNode<T>(null);
            _tail = new SortedNode<T>(null);
            // Set up the pointers
            _head.next = _tail;
            _tail.prev = _head;
            _head.sortedNext = _tail;
            _tail.sortedPrev = _head;
            _sortedHead = _head;
            _sortedTail = _tail;
        }

        int insert(T value) {
            SortedNode<T> nn = new SortedNode<T>(value);

            // always add the node at the end
            nn.prev = _tail.prev;
            nn.prev.next = nn;
            nn.next = _tail;
            _tail.prev = nn;

            // now insertion-sort it into the sorted chain
            int count = 0;
            SortedNode<T> ptr = _sortedHead.sortedNext;
            while (ptr.sortedNext != null) {
                if (_comp.compare(ptr._value, nn._value) >= 0) {
                    break;
                }
                ++count;
                ptr = ptr.sortedNext;
            }

            // update the sorted pointers
            nn.sortedNext = ptr;
            nn.sortedPrev = ptr.sortedPrev;
            if (nn.sortedPrev != null)
                nn.sortedPrev.sortedNext = nn;
            ptr.sortedPrev = nn;
            return count;
        }

        void trim() {
            // remove from the head...
            if (_head.next != _tail) {
                SortedNode<T> tmp = _head.next;
                _head.next = tmp.next;
                _head.next.prev = _head;

                // now update the sorted list
                if (tmp.sortedPrev != null) {
                    tmp.sortedPrev.sortedNext = tmp.sortedNext;
                }
                if (tmp.sortedNext != null) {
                    tmp.sortedNext.sortedPrev = tmp.sortedPrev;
                }
            }
        }

        void printList() {
            SortedNode<T> ptr = _head.next;
            while (ptr != _tail) {
                System.out.println("node: v: " + ptr._value);
                ptr = ptr.next;
            }
        }

        void printSorted() {
            SortedNode<T> ptr = _sortedHead.sortedNext;
            while (ptr != _sortedTail) {
                System.out.println("sorted: v: " + ptr._value);
                ptr = ptr.sortedNext;
            }
        }

        Comparator _comp;
        SortedNode<T> _head;
        SortedNode<T> _tail;
        SortedNode<T> _sortedHead;
        SortedNode<T> _sortedTail;
    }

    public static class IntComparator implements Comparator {
        public int compare(Object v1, Object v2) {
            Integer iv1 = (Integer) v1;
            Integer iv2 = (Integer) v2;
            return iv1.compareTo(iv2);
        }
    }

    public static void main(String[] args) {
        SortedLL<Integer> ll = new SortedLL<Integer>(new IntComparator());
        System.out.println("inserting: " + ll.insert(1));
        System.out.println("inserting: " + ll.insert(3));
        System.out.println("inserting: " + ll.insert(2));
        System.out.println("inserting: " + ll.insert(5));
        System.out.println("inserting: " + ll.insert(4));
        ll.printList();
        ll.printSorted();

        System.out.println("inserting new value");
        System.out.println("inserting: " + ll.insert(3));
        ll.trim();
        ll.printList();
        ll.printSorted();
    }
}
Let the cache be a list where you insert at the start, so the oldest element sits at the end and is the one removed.
Then after every insertion just scan the whole list and calculate the number you need.
Take a look at the commons-math implementation of the DescriptiveStatistics class (Percentile.java)
180 values is not many, and a simple array with a brute-force search and System.arraycopy() should take less than 1 microsecond (1/1000 of a millisecond) and incurs no GC. It could well be faster than playing with more complex collections.
I suggest you keep it simple and measure how long it takes before assuming you need to optimise it.