I want to write a program that implements an array-based stack which accepts integers entered by the user. The program should then identify any occurrences of a given value from the user and remove the repeated values from the stack (using the Java programming language).
I just need your help writing the removal method.
e.g.
input: 6 2 3 4 3 8
output: 6 2 4 8
Consider Collection.contains (possibly in conjunction with Arrays.asList, if you are so unfortunate), HashMap, or Set.
It really depends on what you have, where you are really going, and what silly restrictions the homework/teacher mandates. Since you say "implement an array-based stack" I am assuming there are some silly mandates in which case I would consider writing a custom arrayContains helper* method and/or using a secondary data-structure (Hash/Set) to keep track of 'seen'.
If you do the check upon insertion, it's just (meta code, it's your homework :-):
function addItem (i) begin
    if not contains(stack, i) then
        push(stack, i)
    end if
end
*You could use the asList/contains approach above if you don't mind being "not very efficient", but Java comes with very little nice support for arrays, hence the recommendation for the helper, which is in turn just a loop over the array returning true if the value was found and false otherwise. (Or perhaps return the index found, or -1... your code :-)
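Such a helper is just a loop, e.g. (a minimal sketch; the names `indexOf`/`contains` and the `topPos` bound are my own, assuming the stack keeps its elements in `arr[0..topPos-1]`):

```java
public class ArrayStackUtil {
    // Returns the index of the first occurrence of value in
    // stack[0..topPos-1], or -1 if the value is absent.
    static int indexOf(int[] stack, int topPos, int value) {
        for (int i = 0; i < topPos; i++) {
            if (stack[i] == value) {
                return i;
            }
        }
        return -1;
    }

    // Boolean variant, built on top of the index-returning one.
    static boolean contains(int[] stack, int topPos, int value) {
        return indexOf(stack, topPos, value) >= 0;
    }
}
```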
Assuming that the "no-duplicates" logic is a part of the stack itself, I would do the following:
1) Implement a helper method:
private boolean remove(int item)
This method should scan the array, and if it finds the item it should shrink the array by moving all subsequent items one position backwards. The returned value indicates whether a removal took place.
2) Now it is easy to implement the push method:
public void push(int item) {
    if (!remove(item)) {
        arr[topPos++] = item;
    }
}
Note that my solution assumes there is always enough space in the array. A proper implementation should take care of resizing the array when necessary.
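A sketch of the whole class under these assumptions (the `arr`/`topPos` names come from the `push()` snippet above; the fixed capacity, `pop()` and `size()` are mine, added for illustration):

```java
public class IntStack {
    private int[] arr = new int[16]; // fixed capacity: no resizing, as noted above
    private int topPos = 0;

    // Scans the array for item; if found, shifts all subsequent
    // items one position backwards and reports that a removal took place.
    private boolean remove(int item) {
        for (int i = 0; i < topPos; i++) {
            if (arr[i] == item) {
                System.arraycopy(arr, i + 1, arr, i, topPos - i - 1);
                topPos--;
                return true;
            }
        }
        return false;
    }

    public void push(int item) {
        if (!remove(item)) {
            arr[topPos++] = item;
        }
    }

    public int pop() { return arr[--topPos]; }

    public int size() { return topPos; }
}
```

With these semantics, pushing 6 2 3 4 3 8 leaves 6 2 4 8: a value that is pushed again is dropped entirely, matching the question's example.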
The question is an interesting (or troubling) one in that it breaks the spirit of the stack to enforce such a constraint. A pure stack can only be queried about its top element.
As a result, doing this operation necessarily requires treating the stack not as a stack but as some other data structure, or at least transferring all of the data in the stack to a different, intermediate data structure.
If you want to accomplish this from within the stack class itself, others' replies will prove useful.
If you want to accomplish this from outside of the stack, using only the traditional methods of a stack interface (push() and pop()), your algorithm might look something like this:
Create a Set of Integers to keep track of values encountered so far.
Create a second stack to hold the values temporarily.
While the stack isn't empty:
- Pop off the top element.
- If the set doesn't contain that element yet, add it to the set and push it onto the second stack.
- If the set does contain the element, that means you've already encountered it and this is a duplicate. So ignore it.
While the second stack isn't empty:
- Pop off the top element.
- Push the element back onto the original stack.
There are various other ways to do this, but I believe all would require some auxiliary data structure that is not a stack.
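The steps above can be sketched like this (`ArrayDeque` stands in for the stack class, since the question's own class isn't shown; note that for each value this keeps the occurrence closest to the top):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class DedupStack {
    // Removes duplicates using only push/pop, a second stack,
    // and a set of values seen so far.
    public static void removeDuplicates(Deque<Integer> stack) {
        Set<Integer> seen = new HashSet<>();
        Deque<Integer> helper = new ArrayDeque<>();
        while (!stack.isEmpty()) {
            int top = stack.pop();
            if (seen.add(top)) {      // add() returns false if already present
                helper.push(top);
            }                          // duplicates are simply dropped
        }
        while (!helper.isEmpty()) {
            stack.push(helper.pop()); // restore the original order
        }
    }
}
```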
Override the push method and have it run through the stack to determine whether the value already exists. If so, return false, otherwise true (if you want to use a boolean return value).
Basically, this is in the spirit of the answer posted by Mr. Schneider, but why shrink the array or modify the stack at all if you can just determine whether a new item is a duplicate? If it's a duplicate, don't add it and the array does not need to be modified. Am I missing something?
I have to optimize an algorithm, and I noticed that we have a loop like this:
while (!floorQueues.values().stream().allMatch(List::isEmpty))
It seems like on each iteration it checks if all of the lists in this map are empty.
The data in the map is taken from a two dimensional array like this
int currentFloorNumber = 0;
for (int[] que : queues) {
    List<Integer> list = Arrays.stream(que).boxed().collect(Collectors.toList());
    floorQueues.put(currentFloorNumber, list);
    currentFloorNumber++;
}
I thought it would be more optimal if I took the count of elements in the arrays while transforming the data, and then checked how many times I have deleted from the lists as the condition to end the loop:
while (countOfDeletedElements < totalCountOfElements)
but when I tested the code it ran slower than before. So I wonder how isEmpty works behind the scenes to be faster than my solution.
This may depend on the implementation of the class that implements List.
An ArrayList simply checks if there are 0 elements:
/**
 * Returns <tt>true</tt> if this list contains no elements.
 *
 * @return <tt>true</tt> if this list contains no elements
 */
public boolean isEmpty() {
    return size == 0;
}
A distinct non-answer: Java performance, and Java benchmarking doesn't work this way.
You can't look at these 5 lines of source code to understand exactly what will happen at runtime. Or to be precise: you have to understand that streams are a very advanced, i.e. complex, thing. Stream code might create plenty of objects at runtime that are used only once and then thrown away. In order to assess the true performance impact, you really have to understand what that code is doing. And that could be: a lot!
The serious answer is: if you really need to understand what is going on, then you will need to:
study the stream implementation in great detail
and worse: you will need to look into the activities of the Just in Time compiler within the JVM (to see for example how that "source" code gets translated and optimised to machine code).
and most likely: you will need to apply a real profiler, and run plenty of experiments.
In any case, you should start reading here to ensure that the numbers you are measuring make sense in the first place.
Normally we have two ways to check whether a list is empty: list.size() > 0 or !list.isEmpty().
With a naive list implementation, size() has to walk all the way to the end of the list, so if the list has a big number of elements it can take a long time to reach the end. isEmpty(), on the other hand, only needs to check whether a first element exists, which is O(1) and returns true/false quickly no matter how long the list is. (Java's own ArrayList and LinkedList both keep the size in a field, so for them size() is O(1) too, but isEmpty() never depends on the implementation being that clever.)
From a performance perspective, we should always use isEmpty() as a best practice.
I have had this question for a while but I have been unsatisfied with the answers because the distinctions appear to be arbitrary and more like conventional wisdom that is sort of blindly accepted rather than assessed critically.
In an ArrayList it is said that insertion cost (for a single element) is linear. If we are inserting at index p for 0 <= p < n where n is the size of the list, then the remaining n-p elements are shifted over first before the new element is copied into position p.
In a LinkedList, it is said that insertion cost (for a single element) is constant. For instance if we already have a node and we want to insert after it, we rearrange some pointers and it's done quickly. But getting this node in the first place, I don't see how it can be done other than a linear search first (assuming it isn't a trivial case like prepending at the start of the list or appending at the end).
And yet in the case of the LinkedList, we don't count that initial search time. To me this is confusing because it's sort of like saying "The ice cream is free... after you pay for it." It's like, well, of course it is... but that sort of skips the hard part of paying for it. Of course inserting in a LinkedList is going to be constant time if you already have the node you want, but getting that node in the first place may take some extra time! I could easily say that inserting in an ArrayList is constant time... after I move the remaining n-p elements.
So I don't understand why this distinction is made for one but not the other. You could argue that insertion is considered constant for LinkedLists because of the cases where you insert at the front or back where linear time operations are not required, whereas in an ArrayList, insertion requires copying of the suffix array after position p, but I could easily counter that by saying if we insert at the back of an ArrayList, it is amortized constant time and doesn't require extra copying in most cases unless we reach capacity.
In other words, we separate the linear stuff from the constant stuff for LinkedList, but we don't separate them for ArrayList, even though in both cases the linear operations may or may not be invoked.
So why do we consider them separate for LinkedList and not for ArrayList? Or are they only being defined here in the context where LinkedList is overwhelmingly used for head/tail appends and prepends as opposed to elements in the middle?
This is basically a limitation of the Java interface for List and LinkedList, rather than a fundamental limitation of linked lists. That is, in Java there is no convenient concept of "a pointer to a list node".
Every type of list has a few different concepts loosely associated with the idea of pointing to a particular item:
The idea of a "reference" to a specific item in a list
The integer position of an item in the list
The value of an item that may be in the list (possibly multiple times)
The most general concept is the first one, and is usually encapsulated in the idea of an iterator. As it happens, the simple way to implement an iterator for an array backed list is simply to wrap an integer which refers to the position of the item in a list. So for array lists only, the first and second ways of referring to items are pretty tightly bound.
For other list types, however, and even for most other container types (trees, hashes, etc.) that is not the case. The generic reference to an item is usually something like a pointer to the wrapper structure around one item (e.g., HashMap.Entry or LinkedList.Entry). For these structures the idea of accessing the nth element isn't necessarily natural or even possible (e.g., unordered collections like sets and many hash maps).
Perhaps unfortunately, Java made the idea of getting an item by its index a first-class operation. Many of the operations directly on List objects are implemented in terms of list indexes: remove(int index), add(int index, ...), get(int index), etc. So it's kind of natural to think of those operations as being the fundamental ones.
For LinkedList though it's more fundamental to use a pointer to a node to refer to an object. Rather than passing around a list index, you'd pass around the pointer. After inserting an element, you'd get a pointer to the element.
In C++ this concept is embodied in the concept of the iterator, which is the first class way to refer to items in collections, including lists. So does such a "pointer" exist in Java? It sure does - it's the Iterator object! Usually you think of an Iterator as being for iteration, but you can also think of it as pointing to a particular object.
So the key observation is: given a pointer (iterator) to an object, you can remove and add from linked lists in constant time, but for an array-like list this takes linear time in general. There is no inherent need to search for an object before deleting it: there are plenty of scenarios where you can maintain or take as input such a reference, or where you are processing the entire list, and there the constant time deletion of linked lists does change the algorithmic complexity.
Of course, if you need to do something like delete the first entry containing the value "foo", that implies both a search and a delete operation. Both array-based and linked lists take O(n) for the search, so they don't differ there - but you can meaningfully separate the search and delete operations.
So you could, in principle, pass around Iterator objects rather than list indexes or object values - at least if your use case supports it. However, at the top I said that "Java has no convenient notion of a pointer to a list node". Why?
Well, because actually using an Iterator is very inconvenient. First of all, it's tough to get an Iterator to an object in the first place: for example, and unlike C++, the add() methods don't return an Iterator - so to get a pointer to the item you just added, you need to go ahead and iterate over the list or use the listIterator(int index) call, which is inherently inefficient for linked lists. Many methods (e.g., subList()) support only a version that takes indexes, but not Iterators - even when such a method could be efficiently supported.
Add to that the restrictions around iterator invalidation when the list is modified, and they actually become pretty useless for referring to elements except in immutable lists.
So Java's support for pointers to list elements is pretty half-hearted, and so it's tough to leverage the constant time operations that linked lists offer, except in cases such as adding to the front of a list, or deleting items during iteration.
It's not limited to lists, either - ConcurrentLinkedQueue is also a linked structure whose nodes can be unlinked in constant time, but you can't reliably use that ability from Java.
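The "deleting items during iteration" case mentioned above is the one place the standard API does hand you the constant-time operation, since the iterator already holds a pointer to the current node. A small sketch:

```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class IteratorRemoveDemo {
    // Removes all even values in a single pass. On a LinkedList each
    // it.remove() is O(1): the iterator points at the node to unlink.
    static void removeEvens(List<Integer> list) {
        for (Iterator<Integer> it = list.iterator(); it.hasNext(); ) {
            if (it.next() % 2 == 0) {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>(List.of(1, 2, 3, 4, 5, 6));
        removeEvens(list);
        System.out.println(list); // [1, 3, 5]
    }
}
```

Doing the same with index-based calls (`list.remove(int)` in a loop) would be quadratic on a LinkedList, because every call re-walks the list from the head.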
If you're using a LinkedList, chances are you're not going to use it for a random access insert. LinkedList offers constant time for push (insert at the beginning) or add (because it has a ref to the final element IIRC). You are correct in your suspicion that an insert into a random index (e.g. insert sorted) will take linear time - not constant.
ArrayList, by contrast, is linear in the worst case. Inserting in the middle does an arraycopy to shift the elements over; that shift is technically linear too, but it is a low-level bulk copy and very fast in practice. Reaching capacity adds a full copy of the backing array on top of that.
Similarly, to find the number of elements in a stack, is a stack.size() call faster than popping each element and counting? (Of course I don't need the stack anymore.)
Stack inherits from Vector, and Vector is the class that defines the size() method, not Stack. Vector also has a protected field called elementCount, which is the number of valid elements in the Vector. I would assume that the size() method just returns this variable, making it much faster to call size() than to pop and count. Also, Vector has no need to do pops to count elements because popping is not a feature of Vector.
The answer depends on the implementation of the Stack class, but logically popping n times cannot be faster than obtaining count and calling clear: it is an O(n) algorithm.
Obtaining the count, on the other hand, could be faster, because it can be done in a single access if Stack stores the number of items that it has, making it an O(1)* algorithm. Similarly, clearing the whole content in one go can be implemented as O(1).
* O(1) is a fancy way of saying "does not depend on the number of elements on the stack".
You can write two functions and test the problem yourself; you'll learn something and answer your own question. Write one function that pops each element, pushes it onto another stack, and when finished pushes everything back onto the original stack (you want to preserve the integrity of your stack: a size query should not modify it). Write the other using stack.size(). Then call each function from your main function, wrapping each call with a timer such as System.nanoTime(). You'll see a difference in time between the two functions. Just try it out!
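Following that suggestion, a crude sketch of such a test (`ArrayDeque` stands in for the stack; per the benchmarking caveats earlier on this page, raw nanoTime() deltas are only a rough indication):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackSizeTiming {
    // Counts by popping everything into a helper stack, then restores
    // the original stack so the query leaves it unchanged.
    static int countByPopping(Deque<Integer> stack) {
        Deque<Integer> helper = new ArrayDeque<>();
        int count = 0;
        while (!stack.isEmpty()) {
            helper.push(stack.pop());
            count++;
        }
        while (!helper.isEmpty()) {
            stack.push(helper.pop());
        }
        return count;
    }

    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (int i = 0; i < 1_000_000; i++) stack.push(i);

        long t0 = System.nanoTime();
        int a = stack.size();           // O(1): reads a stored counter
        long t1 = System.nanoTime();
        int b = countByPopping(stack);  // O(n): touches every element twice
        long t2 = System.nanoTime();

        System.out.printf("size()=%d in %d ns, popping=%d in %d ns%n",
                a, t1 - t0, b, t2 - t1);
    }
}
```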
Of course this depends on the implementation of the stack as mentioned in other answers. Just read the documentation of the stack interface and it should be in there.
In the following piece of code:
if (map.containsKey(key)) {
    map.remove(key);
}
Looking at performance, is it useful to first do a Map.containsKey() check before trying to remove the value from the map?
Same question goes for retrieving values, is it useful to first do the contains check if you know that the map contains no null values?
if (map.containsKey(key)) {
    Object value = map.get(key);
}
remove returns null if there's no mapping for the key; no exception will be thrown:
public V remove(Object key)
I don't see any reason to perform that if before trying to remove a key, except perhaps if you want to count how many items were removed from the map.
In the second example, you'll get null if the key doesn't exist. Whether to check or not, depends on your logic.
Try not to waste your time on thinking about performance, containsKey has O(1) time complexity:
This implementation provides constant-time performance for the basic operations (get and put)
is it useful to first do a Map.containsKey() check before trying to remove the value from the map?
No, it is counterproductive:
In the case when the item is not there, you would see no difference
In the case when the item is there, you would end up with two look-ups.
If you want to remove the item unconditionally, simply call map.remove(key).
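A minimal sketch of the unconditional call, including the counting case mentioned above (standard HashMap behavior; the keys are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class RemoveDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);

        // One lookup instead of two: remove() returns the previous
        // value, or null if the key was absent - never an exception.
        Integer removed = map.remove("a");   // 1
        Integer missing = map.remove("b");   // null

        // If you need to know whether anything was removed,
        // test the return value rather than calling containsKey first.
        System.out.println(removed != null); // true
        System.out.println(missing != null); // false
    }
}
```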
Same question goes for retrieving values
Same logic applies here. Of course you need to check the result for null, so in this case the if stays there.
Note that this cleanup exercise is about readability first, and only then about performance. Accessing a map is a fast operation, so accessing it twice is unlikely to cause major performance issues except for some rather extreme cases. However, removing an extra conditional will make your code more readable, which is very important.
The Java documentation on remove() states that it will remove the element only if the map contains such an element. So the contains() check before remove() is redundant.
This is subjective (and entirely a case of style), but for the case where you're retrieving a value, I prefer the contains(key) call to the null check. Boolean comparisons just feel better than null comparisons. I'd probably feel differently if Map<K,V>.get(key) returned Optional<V>.
Also, it's worth noting the "contains no null values" assertion is one that can be fairly hard to prove, depending on the type of the Map (which you might not even know). In general I think the redundant check on retrieval is (or maybe just feels) safer, just in case there's a mistake somewhere else (knocks on wood, checks for black cats, and avoids a ladder on the way out).
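The Optional-style flow wished for above can be approximated today by wrapping get() yourself; a small sketch (map contents are made up), with a single lookup and no explicit null comparison:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalGetDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);

        // Optional.ofNullable turns get()'s null-or-value result
        // into a boolean-flavored API: isPresent / ifPresent / orElse.
        Optional<Integer> value = Optional.ofNullable(map.get("a"));
        value.ifPresent(v -> System.out.println("found " + v));

        int orDefault = Optional.ofNullable(map.get("b")).orElse(0);
        System.out.println(orDefault); // 0: key "b" is absent
    }
}
```

Note this only helps when null values can't occur in the map, the same caveat as with the plain null check.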
For the removal operation you're spot on. The check is useless.
What is the difference between the addItem and insertItemAt methods in Java?
One thing that I have noticed while writing a program is that the addItem method appends entries at the end of a JComboBox.
The insertItemAt method pins the entry at a specific position.
Is that the only difference?
It depends on the implementation of the underlying data model, but as for the semantics, yes, that would be the only difference. Here are some differences for insertItemAt:
- might throw an IndexOutOfBoundsException if the specified index is invalid
- does not select an item whereas addItem selects the inserted item if it is the only one in the list
Different implementation might do things differently and have different performance, e.g. a linked list might be faster for insertItemAt than an array based list.
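Both behaviors are easy to see in a small snippet using the standard JComboBox API (nothing needs to be displayed on screen):

```java
import javax.swing.JComboBox;

public class ComboDemo {
    public static void main(String[] args) {
        JComboBox<String> combo = new JComboBox<>();
        combo.addItem("A");         // appended at the end; becomes the
                                    // selection, since it is the only item
        combo.addItem("B");         // appended; selection stays on "A"
        combo.insertItemAt("C", 1); // placed at index 1, selection unchanged

        for (int i = 0; i < combo.getItemCount(); i++) {
            System.out.println(i + ": " + combo.getItemAt(i));
        }
        System.out.println("selected: " + combo.getSelectedItem());
    }
}
```

The list ends up as A, C, B, and the selection remains on "A": insertItemAt never touches the selection, while addItem selects the item it adds only when the list was previously empty.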
Is That The Only Difference?
From the standpoint of how it affects the underlying Collection, yes.
Both insert items; the only difference is that the first inserts the item at the end, like a stack, while the second inserts an item at an indicated position, shifting the other items accordingly.
So basically, yes, that's the only difference.