Adding to the front of an ArrayDeque in Java

I am tasked with creating a method that adds to the front (left) of an ArrayDeque without using the Deque library. I have come up with a method, though it's not adding to the deque; it comes out empty.
Here is my addLeft method:
public T[] addLeft(T item){
    T[] copyarr = (T[]) new Object[arr.length + 1];
    if (isEmpty()) {
        copyarr[frontPos] = item;
    } else {
        copyarr[frontPos] = item;
        frontPos--;
        for (int i = 1; i < copyarr.length; i++) {
            copyarr[i] = arr[i];
        }
    }
    arr = copyarr;
    return arr;
}
Here is the test code I've been using:
public class DequeueTest {
    public static void main(String[] args) {
        Dequeue test = new Dequeue();
        test.addLeft(3);
        test.addLeft(4);
        System.out.println(test.toString());
    }
}
Any idea where I have gone wrong?

Can you clarify "ArrayDeque without using the Deque library"? Does it mean you do not want to use the ArrayDeque from the JDK?
You've gone wrong at the array copy statement, which should be
copyarr[i] = arr[i-1]
(note how the index is shifted)
Your implementation has serious runtime costs for adding elements in front, as you are copying the array every time.
(From the computer science class: ArrayLists gain their runtime behaviour from doubling the array size when needed, thus amortizing the resizing cost over all append operations.)
Also, as already mentioned in the comments, have a look at the ArrayDeque implementation for inspiration. Maybe offerFirst is really just what you need.
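For comparison, this is what the JDK's ArrayDeque already offers (shown for inspiration only, since the assignment forbids using it directly):

import java.util.ArrayDeque;

// Illustration only: the JDK ArrayDeque already has offerFirst/addFirst,
// which is exactly the "add to the front" operation being implemented by hand here.
public class OfferFirstDemo {
    public static void main(String[] args) {
        ArrayDeque<Integer> deque = new ArrayDeque<>();
        deque.offerFirst(3);          // deque: [3]
        deque.offerFirst(4);          // deque: [4, 3]
        System.out.println(deque);    // prints [4, 3]
    }
}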
Addition based on the comment by Ferrybig:
You may want to track the array size in an extra variable, so that the storage array can be larger than the actual deque. This way you can double the storage array size when it is too small, sparing you from creating a copy every time.
Still, you have to move the elements (from higher to lower indexes) and finally put in the new element on the first place.
Second optimization step: save the first position and, if you expect multiple inserts at front, reserve some space, so that you don't have to move the elements on every insertion.
(Again, you are trading space for runtime complexity.)
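A minimal sketch of that idea (the field names and the doubling policy are my own assumptions, not the asker's actual class):

// Sketch only: keep spare capacity in front of the elements so that addLeft normally
// writes a single slot, and double the backing array only when the front runs out.
// Field names (arr, frontPos, size) and the growth policy are assumptions.
public class DequeSketch<T> {
    private T[] arr = (T[]) new Object[8];
    private int frontPos = arr.length / 2; // the front element, if any, sits at frontPos
    private int size = 0;

    public void addLeft(T item) {
        if (frontPos == 0) { // no room in front: grow and re-center the elements
            T[] bigger = (T[]) new Object[arr.length * 2];
            int newFront = bigger.length / 2;
            System.arraycopy(arr, frontPos, bigger, newFront, size);
            arr = bigger;
            frontPos = newFront;
        }
        arr[--frontPos] = item;
        size++;
    }

    public boolean isEmpty() {
        return size == 0;
    }
}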

Why does creating a copy of an object still alter instance variables of original object?

I have two classes. Class Algorithm implements a method findIntersections(), a sweep line algorithm that checks for intersections in O(n log n) time.
It also implements a method addSegments, which adds objects of type Segment (two points) to a priority queue ordered by the x coordinate.
public class Algorithm {
    public PriorityQueue<Segment> pQueue = new PriorityQueue<>();

    // This function adds objects of type Segment to a priority queue
    public void addSegments(List<Segment> segments) {
        pQueue.addAll(segments);
        // do something
    }

    // This function implements a sweep line algorithm to check for intersections.
    public void findIntersection() {
        while (!pQueue.isEmpty()) {
            pQueue.poll(); // removes element from the queue
            // do something
        }
    }
}
The other class, Model, loads data from a CSV file into the priority queue. This is an intensive process which I only want to do once.
On the other hand, checkForCollisions is called millions of times.
I want to check for collisions between the supplied segment and the rest of the segments added to the priority queue from the CSV file.
I do not want to add elements to the priority queue from scratch each time. This would not be feasible.
public class Model {
    public Algorithm algoObj = new Algorithm();
    public ArrayList<Segment> segments = new ArrayList<>();
    public ArrayList<Segment> single_segment = new ArrayList<>();

    public boolean loadCSV() {
        // read CSV file
        while ((strLine = br.readLine()) != null) {
            segments.add(new Segment()); // add all segments in the CSV file to an ArrayList
            algoObj.addSegments(segments); // adds 4000 objects of type Segment to the priority queue
        }
    }

    // This function is called millions of times
    public boolean checkForCollisions(Segment segment_to_check) {
        single_segment.add(segment_to_check); // add 1 segment
        algoObj.addSegments(single_segment); // adds 1 object of type Segment to the priority queue
        algoObj.findIntersection();
        single_segment.remove(segment_to_check); // remove the above segment to get back to the original data
    }
}
TL;DR
The problem I am having is that after the first call of checkForCollisions the priority queue has changed since findIntersection() works by polling elements from the queue, thus altering the queue.
How do I keep the priority queue created by algoObj.addSegments() from changing between function calls?
Does this have to do with shallow and deep copying as explained here?
I tried creating a copy of the queue at the beginning of the function and then altering the copy:
public boolean checkForCollisions(Segment segment_to_check) {
    Algorithm copy = algoObj;
    single_segment.add(segment_to_check); // add 1 segment
    copy.addSegments(single_segment); // adds 1 object of type Segment to the priority queue
    copy.findIntersection();
    single_segment.remove(segment_to_check); // remove the above segment to get back to the original data
}
This however does not work as it still alters the priority queue of the original algoObj.
I believe this is a beginner's question and stems from my lack of proper understanding when working with OO languages. Any help would be appreciated.
First of all, it is crucial to know that assigning an existing object to another variable does not create a copy of the original object:
MyObject a = new MyObject();
MyObject b = a; // does NOT create a copy!
// now a and b "point" to the same single instance of MyObject!
Some thoughts about your actual problem:
Your priority queue is just a working data structure that is used by the intersection algorithm, and only while the algorithm is running. When it's done (so the intersection(s) have been found), it is empty or at least altered, as you already wrote. So the priority queue must be recreated for every algorithm run.
So what you should do:
Load the segments from the CSV file into your ArrayList, but don't pass it to the priority queue yet.
Refill (or recreate) the priority queue every time before calling findIntersection(). This is best done by passing all segments to the method and creating a new priority queue from scratch:
public void findIntersection(Collection<Segment> segments) {
    PriorityQueue<Segment> pQueue = new PriorityQueue<Segment>(segments);
    while (!pQueue.isEmpty()) {
        pQueue.poll(); // removes element from the queue
        // do something
    }
}
Hint: As I already wrote at the beginning, this copies neither the individual segments nor the segment collection; it just passes a reference. Of course, the priority queue will have to create internal structures at construction time, so if the segments collection is huge, this may take some time.
If this solution is too slow for your needs, you will have to work on your algorithms. Do you really need to check for intersections so often? If you add just one segment to the list, it should be sufficient to check intersections with the other segments, but not if the other segments intersect each other. Probably you could store your segments in a binary search tree similar to the one used by the Bentley–Ottmann algorithm. Whenever a new segment "arrives", it can be checked against the search tree, which should be doable with a time complexity of about O(log n). After that, the segment can be inserted into the tree, if necessary.
Or maybe you can add all segments first and then check for intersections just once.
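For illustration, here is a minimal sketch of the "check only the new segment" idea; the Segment#intersects helper is an assumption and does not appear in the original code:

import java.util.ArrayList;
import java.util.List;

// Sketch: keep the CSV segments in a plain list and test only the incoming segment
// against them. Segment#intersects(Segment) is an assumed helper, not part of the
// original code. This is O(n) per query; a search tree could cut that down further.
public class CollisionChecker {
    private final List<Segment> loadedSegments = new ArrayList<>();

    public void load(List<Segment> csvSegments) {
        loadedSegments.addAll(csvSegments); // done once; nothing is consumed later
    }

    public boolean checkForCollisions(Segment candidate) {
        for (Segment s : loadedSegments) {
            if (s.intersects(candidate)) {
                return true;
            }
        }
        return false;
    }
}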

ArrayLists better practice

I have written two methods in Java. The second method looks cleaner to me because I come from a Python background, but I think it will be slower than the first because indexOf() also does the iteration. Is there a way to use a for-each loop correctly in a situation like this? Also, if there is a better way to do it (without Streams), how can it be done?
private ArrayList<MyObject> myObjects;
First method:
private int findObject(String objectName) {
    for (int i = 0; i < this.myObjects.size(); i++) {
        MyObject myObject = this.myObjects.get(i);
        if (myObject.getName().equals(objectName)) return i;
    }
    return -1;
}
Second method:
private int findObject(String objectName) {
    for (MyObject myObject : this.myObjects) {
        if (myObject.getName().equals(objectName)) return this.myObjects.indexOf(myObject);
    }
    return -1;
}
I think it will be slower than the first because indexOf() also does the iteration.
You are correct.
Is there a way to use a for-each loop correctly in a situation like this?
You can use a for-each loop AND an index variable.
private int findObject(String objectName) {
    int i = 0;
    for (MyObject myObject : this.myObjects) {
        if (myObject.getName().equals(objectName)) return i;
        i++;
    }
    return -1;
}
This would be a good solution if myObjects.get(i) is an expensive operation (e.g. on a LinkedList where get(n) is O(N)) or if it is not implementable (e.g. if you were iterating a Stream).
You could also use a ListIterator provided that myObjects has a method that returns a ListIterator; see #Andy Turner's answer for an example. (It won't work for a typical Set or Map class.)
The first version is perfect if you know you're working with an ArrayList (or some other array-based List, e.g. Vector).
If myObjects happens to be a LinkedList or similar, your performance will degrade with longer lists, as get(i) then no longer executes in constant time.
Your second approach will handle LinkedLists as well as ArrayLists, but it iterates twice over your list, once in your for loop, and once in the indexOf() call.
I'd recommend a third version: use the for loop from the second approach, and add an integer counting variable, incrementing inside the loop. This way, you get the best of both: iterating without performance degradation, and cheap position-counting.
The better way of doing this (that avoids you having to maintain a separate index variable; and works for non-RandomAccess lists too) would be to use a ListIterator:
for (ListIterator<MyObject> it = myObjects.listIterator(); it.hasNext();) {
    MyObject myObject = it.next();
    if (myObject.getName().equals(objectName)) return it.previousIndex();
}
return -1;

Stack that you can move nth element to the top of the stack

I'm looking for a Stack data structure that also allows moving the nth element to the top of the stack. So in addition to pop(), push(), peek() I want something like moveToTop(int n) where the top of the stack n=0 and the bottom of the stack n=size-1.
What would be the best way to implement that? I'm working in Java.
There is no moveToTop method in the standard stack data structure, but if you want one, I think you can implement it like this:
public class MyStack<T> extends Stack<T> {
    public synchronized void moveToTop(int n) throws Exception {
        int size = this.size();
        if (n >= size) {
            throw new Exception("error position");
        }
        // the question counts n from the top (n = 0 is the top), while Vector
        // indexes from the bottom, so convert before removing
        T ele = remove(size - 1 - n);
        push(ele);
    }
}
Why reinvent the wheel? The standard Stack class is simple:
https://docs.oracle.com/javase/8/docs/api/java/util/Stack.html
It inherits remove(int index) from Vector.
You can create another array or other underlying structure to hold items temporarily: copy the items in front of item n into it, shift them back by one position, and then put the item you want on top.
public class MyStack<T> {
    T[] items;   // item 0 is the top of the stack
    ....
    void moveToTop(int n) {
        T obj = items[n];
        T[] tempItems = (T[]) new Object[n];
        System.arraycopy(items, 0, tempItems, 0, n);  // copy the n items above position n
        System.arraycopy(tempItems, 0, items, 1, n);  // shift them down by one slot
        items[0] = obj;
    }
    ....
}
Removed other parts for brevity.
Instead of going through all this hassle, you can also look at a doubly linked list or a skip list.
java.util.LinkedList fulfills your requirement, because it is a linked list, a stack, and a queue at the same time.
Relevant methods:
public E remove(int index)
Remove by index, and get the removed value.
public void push(E e)
Push to top.
So, what you need to do is:
Figure out the index.
Remove it by index, and get the return value.
Push the return value to top.
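A small sketch of those three steps (the wrapper class and its name are my own; only the java.util.LinkedList calls are from the JDK):

import java.util.LinkedList;

// Sketch: index 0 is the top of the stack, matching the question's convention.
// The wrapper class is illustrative; LinkedList provides all the operations.
public class LinkedListStack<T> {
    private final LinkedList<T> list = new LinkedList<>();

    public void push(T item) { list.push(item); }  // adds at index 0
    public T pop()           { return list.pop(); }
    public T peek()          { return list.peek(); }

    public void moveToTop(int n) {
        T item = list.remove(n); // remove by index and get the removed value
        list.push(item);         // push the return value back on top
    }
}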
There are a couple of ways to implement a stack that I know of:
- Implement it by extending Vector
- Implement it using any List implementation
- Implement it as a linked data structure
- Implement it on top of a plain Java array
I suggest implementing it on top of a plain Java array. It is harder to implement than the others, but it is really helpful for understanding the logic; see the sketch below.
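A minimal sketch of that array-backed approach (class and field names are my own; the capacity is fixed and resizing is omitted for brevity):

// Sketch of an array-backed stack that supports moveToTop, where n = 0 is the top.
// Illustrative names, fixed capacity, no resizing or bounds checking.
public class ArrayStack<T> {
    private final Object[] items = new Object[100]; // fixed capacity for brevity
    private int size = 0;                           // items[size - 1] is the top

    public void push(T item) {
        items[size++] = item;
    }

    @SuppressWarnings("unchecked")
    public T pop() {
        return (T) items[--size];
    }

    @SuppressWarnings("unchecked")
    public T peek() {
        return (T) items[size - 1];
    }

    // n is counted from the top: n = 0 is the top, n = size - 1 is the bottom
    @SuppressWarnings("unchecked")
    public void moveToTop(int n) {
        int index = size - 1 - n;                   // convert to the array index
        T moved = (T) items[index];
        // shift the elements above it down by one slot, then put it on top
        System.arraycopy(items, index + 1, items, index, size - 1 - index);
        items[size - 1] = moved;
    }
}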

Using synchronizedList with for loop and adding items inside it

I'm using
Collections.synchronizedList(new ArrayList<T>())
part of the code is:
list = Collections.synchronizedList(new ArrayList<T>());

public void add(T arg) {
    int i;
    synchronized (list) {
        for (i = 0; i < list.size(); i++) {
            T arg2 = list.get(i);
            if (arg2.compareTo(arg) < 0) {
                list.add(i, arg);
                break;
            }
        }
Is it right that the for loop is actually using an iterator and therefore I must wrap it with synchronized?
Is it thread-safe to use synchronized and do the addition inside it like I did here?
I'm sorry if these questions are very basic, I'm new to the subject and didn't find answers on the internet.
Thank you!!
Is it right that the for loop is actually using an iterator and therefore I must wrap it with synchronized?
There are two parts to your question.
Firstly, no, you're not using an iterator here, this is a basic for loop.
The enhanced for loop is the for loop which uses an iterator:
for (T element : list) { ... }
You can see in the language spec how this uses the iterator - search for where it says "The enhanced for statement is equivalent to a basic for statement of the form".
Secondly, even though you're not using an iterator, you do need synchronized. The two are orthogonal.
You are doing multiple operations (the size, the get and the add), with dependencies between them. You need to make sure that no other thread interferes with your logic:
the get depends on the size, since you don't want to try to get an element with index >= size, for instance;
the add depends on the get, since you're apparently trying to ensure the list elements are ordered. If another thread could sneak in and change the element after you get it, you might insert the new element in the wrong place.
You correctly avoid this potential interference through synchronization on list, and by creating the synchronizedList in such a way that nothing other than the synchronizedList can get direct access to the underlying list.
If your arg2.compareTo(arg) never returns 0 (zero), you can use a TreeSet. It will be much simpler:
set = Collections.synchronizedSet(new TreeSet<T>());

public void add(T arg) {
    set.add(arg);
}
If you need to hold equal items (compareTo returns 0), then use the list:
list = new ArrayList<T>();

public void add(T arg) {
    synchronized (list) {
        int index = Collections.binarySearch(list, arg);
        if (index < 0) {
            index = -index - 1; // binarySearch returns (-(insertion point) - 1) when the element is not found
        }
        list.add(index, arg);
    }
}
In the first and second cases the lookup complexity is O(log n) (about 10 comparisons for 1000 items). Your code's complexity is O(n) (up to 1000 comparisons for 1000 items).

How to optimize my Trie implementation so that I don't get OutOfMemoryError

I'm implementing text prediction using a very simple Trie implementation, which is a slightly modified version of this code.
It performs better than I initially expected, but I'm frequently hitting an OutOfMemoryError. Any ideas how I can solve this problem by either:
increasing the memory designated to my app
optimizing the implementation to use less memory
or any other suggestions?
I've seen recommendations that the memory limitation problems could be avoided by using a native implementation of a part of the code, but I would prefer to stay in Java, if possible.
You could try turning on largeHeap in your manifest to see if it helps:
http://developer.android.com/guide/topics/manifest/application-element.html#largeHeap
By doing this.next = new Node[R]; the implementation allocates an array of 26 node pointers at level 1, then up to 26^2 pointers at level 2, then 26^3 at level 3, and so on. That could be one reason you run out of memory.
You can try and change the implementation so that every Node has a HashMap of nodes with a small initial capacity, say 5. The HashMap will grow only when there's a real need - which will save some memory.
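A minimal sketch of that change; class and method names are illustrative and not taken from the linked implementation:

import java.util.HashMap;
import java.util.Map;

// Sketch of a trie node backed by a small HashMap instead of a 26-slot array.
// Names are illustrative; the linked implementation differs.
class TrieNode {
    private final Map<Character, TrieNode> children = new HashMap<>(5); // grows only on demand
    private boolean isWord;

    void insert(String word, int pos) {
        if (pos == word.length()) {
            isWord = true;
            return;
        }
        // create the child lazily, so unused branches cost nothing
        children.computeIfAbsent(word.charAt(pos), c -> new TrieNode())
                .insert(word, pos + 1);
    }

    boolean contains(String word, int pos) {
        if (pos == word.length()) {
            return isWord;
        }
        TrieNode child = children.get(word.charAt(pos));
        return child != null && child.contains(word, pos + 1);
    }
}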
Another problem in that code is with the delete:
// delete a node
public void delete(Node node) {
    for (int i = 0; i < R; i++) {
        if (node.next != null) {
            delete(node.next[i]);
        }
    }
    node = null; // <-- this is not doing anything!
}
The reason it's not doing anything is that the reference to the node is passed by value in Java - so the real reference remains intact. What you should do instead is:
// delete a node
public void delete(Node node) {
    for (int i = 0; i < R; i++) {
        if (node.next != null) {
            delete(node.next[i]);
            node.next[i] = null; // <-- here you nullify the actual array item,
                                 //     which makes the object a good candidate
                                 //     for the next time GC runs
        }
    }
}
So it could also be a memory leak - in case you counted on delete to free space.
