class Order {
    String name;

    Order(String n) {
        name = n;
    }
    // setter and getters of name
}

Order a = new Order("same");
Order b = new Order("same");
Order c = new Order("diff");
List<Order> nameList; // contains a, b, c
I want to separate the list of Orders into two lists:

List<Order> dupList    // a, b
List<Order> nondupList // c

Now I want to check whether the same name appears in multiple orders of "nameList". I achieved that by taking an index of the List and comparing its Order against the Orders at every other index, but is there a better way to achieve this?
Probably one other way could be to override the hashCode and equals methods, generating the hashCode from the string name.
public class Order {
    String name;

    public Order(String n) {
        name = n;
    }
    // setter and getters of name

    @Override
    public int hashCode() {
        // Same calculation as String.hashCode()
        int h = 0;
        int len = name.length();
        for (int i = 0; i < len; i++)
            h = 31 * h + name.charAt(i);
        return h;
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Order))
            return false;
        // Compare names directly; equal hash codes alone do not guarantee equality.
        return name.equals(((Order) obj).name);
    }
}
...
List<Order> nameList = ...; // a, b, c
Set<Order> nonDuplicate = new HashSet<Order>(nameList);
If you want to use pure Java, add the elements to a List and sort it with the appropriate Comparator. Then iterate over the list, keeping track of the previous element (a control break): if an element is the same as the previous one, both are duplicates. If it is not (or it is the first element), it is only a candidate, and you need to wait for the next comparison to know whether it is a duplicate.
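A minimal sketch of that sort-and-scan idea for the Order question above, assuming Order exposes a getName() getter (per the "setter and getters" comment) and using only java.util:

// Sort a copy by name; after sorting, equal names sit next to each other.
List<Order> sorted = new ArrayList<Order>(nameList);
Collections.sort(sorted, new Comparator<Order>() {
    public int compare(Order o1, Order o2) {
        return o1.getName().compareTo(o2.getName());
    }
});
for (int i = 1; i < sorted.size(); i++) {
    if (sorted.get(i).getName().equals(sorted.get(i - 1).getName())) {
        // sorted.get(i - 1) and sorted.get(i) share the same name
    }
}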
If you don't want to sort, you can add the elements to a Set as they appear; if an element is already in the set before you add it, add it to a set of duplicates. You can do the check on both sets, removing as you go, or remove the duplicates from the complete set at the end. You can use any collection, but a Set is more efficient since it has a fast contains method.
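A sketch of that no-sort variant, under the same getName() assumption; a second pass splits the original list into dupList and nondupList as asked:

// Names seen so far, and names seen more than once.
Set<String> seen = new HashSet<String>();
Set<String> duplicateNames = new HashSet<String>();
for (Order o : nameList) {
    if (!seen.add(o.getName())) {   // add() returns false if the name was already present
        duplicateNames.add(o.getName());
    }
}
List<Order> dupList = new ArrayList<Order>();
List<Order> nondupList = new ArrayList<Order>();
for (Order o : nameList) {
    if (duplicateNames.contains(o.getName())) {
        dupList.add(o);
    } else {
        nondupList.add(o);
    }
}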
If you can use libraries, you can just use Guava and add everything to a Multiset (http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/collect/Multiset.html). Then iterate over the multiset and you have the count per element.
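With Guava that could look roughly like this (HashMultiset is one Multiset implementation; the getName() getter is again assumed):

import com.google.common.collect.HashMultiset;
import com.google.common.collect.Multiset;

// Count how many orders carry each name, then inspect the counts.
Multiset<String> nameCounts = HashMultiset.create();
for (Order o : nameList) {
    nameCounts.add(o.getName());
}
for (Multiset.Entry<String> entry : nameCounts.entrySet()) {
    if (entry.getCount() > 1) {
        System.out.println(entry.getElement() + " appears " + entry.getCount() + " times");
    }
}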
You could use a Map<String, List<Order>>: get the list for the given name; if it is null, create it and put it in the map; then add the current order to that list.
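A sketch of that grouping idea, with the same getName() assumption and only java.util types:

// Group orders by name; a name mapped to more than one order is a duplicate.
Map<String, List<Order>> byName = new HashMap<String, List<Order>>();
for (Order o : nameList) {
    List<Order> group = byName.get(o.getName());
    if (group == null) {
        group = new ArrayList<Order>();
        byName.put(o.getName(), group);
    }
    group.add(o);
}
for (Map.Entry<String, List<Order>> entry : byName.entrySet()) {
    if (entry.getValue().size() > 1) {
        // entry.getKey() is a name shared by several orders
    }
}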
Related
How could I go about detecting (returning true/false) whether an ArrayList contains more than one of the same element in Java?
Many thanks,
Terry
Edit
Forgot to mention that I am not looking to compare "Blocks" with each other but their integer values. Each "block" has an int and this is what makes them different.
I find the int of a particular Block by calling a method named "getNum" (e.g. table1[0][2].getNum()).
Simplest: dump the whole collection into a Set (using the HashSet(Collection) constructor or Set.addAll), then see if the Set has the same size as the ArrayList.
List<Integer> list = ...;
Set<Integer> set = new HashSet<Integer>(list);

if (set.size() < list.size()) {
    /* There are duplicates */
}
Update: If I'm understanding your question correctly, you have a 2d array of Block, as in
Block table[][];
and you want to detect if any row of them has duplicates?
In that case, I could do the following, assuming that Block implements "equals" and "hashCode" correctly:
for (Block[] row : table) {
    Set<Block> set = new HashSet<Block>();
    for (Block cell : row) {
        set.add(cell);
    }
    if (set.size() < 6) { // has duplicate
    }
}
I'm not 100% sure of that for syntax, so it might be safer to write it as
for (int i = 0; i < 6; i++) {
    Set<Block> set = new HashSet<Block>();
    for (int j = 0; j < 6; j++)
        set.add(table[i][j]);
    ...
Set.add returns false if the item being added is already in the set, so you could even short-circuit and bail out on any add that returns false if all you want to know is whether there are any duplicates.
Improved code, using the return value of Set#add instead of comparing the sizes of the list and the set.

public static <T> boolean hasDuplicate(Iterable<T> all) {
    Set<T> set = new HashSet<T>();
    // Set#add returns false if the set did not change, which
    // indicates that the element is a duplicate.
    for (T each : all) if (!set.add(each)) return true;
    return false;
}
With Java 8+ you can use the Stream API:

boolean areAllDistinct(List<Block> blocksList) {
    return blocksList.stream().map(Block::getNum).distinct().count() == blocksList.size();
}
If you are looking to avoid having duplicates at all, then you should just cut out the middle process of detecting duplicates and use a Set.
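For instance, a minimal sketch (LinkedHashSet is chosen here only to keep insertion order; a plain HashSet works just as well, and Block is assumed to implement equals/hashCode as above; someBlock is a hypothetical variable):

// Rejects duplicates at insertion time instead of detecting them later.
Set<Block> blocks = new LinkedHashSet<Block>();
boolean added = blocks.add(someBlock); // false means someBlock was already present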
Improved code that returns the duplicate elements: it can find the duplicates in a Collection and return them as a list, while the unique elements can be obtained from the Set.
public static <T> List<T> getDuplicate(Collection<T> list) {
    final List<T> duplicatedObjects = new ArrayList<T>();
    Set<T> set = new HashSet<T>() {
        @Override
        public boolean add(T e) {
            if (contains(e)) {
                duplicatedObjects.add(e);
            }
            return super.add(e);
        }
    };
    for (T t : list) {
        set.add(t);
    }
    return duplicatedObjects;
}

public static <T> boolean hasDuplicate(Collection<T> list) {
    return !getDuplicate(list).isEmpty();
}
I needed to do a similar operation for a Stream, but couldn't find a good example. Here's what I came up with.
public static <T> boolean areUnique(final Stream<T> stream) {
    final Set<T> seen = new HashSet<>();
    return stream.allMatch(seen::add);
}
This has the advantage of short-circuiting when duplicates are found early rather than having to process the whole stream and isn't much more complicated than just putting everything in a Set and checking the size. So this case would roughly be:
List<T> list = ...
boolean allDistinct = areUnique(list.stream());
If your elements are somehow Comparable (the fact that the order has any real meaning is indifferent -- it just needs to be consistent with your definition of equality), the fastest duplicate-removal solution is going to sort the list (O(n log(n))) and then do a single pass looking for repeated elements, that is, equal elements that follow each other (this is O(n)).
The overall complexity is going to be O(n log(n)), which is roughly the same as what you would get with a Set (n times log(n)), but with a much smaller constant. This is because the constant in sort/dedup results from the cost of comparing elements, whereas the cost for the set is most likely to result from a hash computation, plus one (possibly several) hash comparisons. That is assuming a hash-based Set implementation; a tree-based one is going to give you O(n log²(n)), which is even worse.
As I understand it, however, you do not need to remove duplicates, but merely test for their existence. So you should hand-code a merge or heap sort algorithm on your array that simply exits returning true (i.e. "there is a dup") if your comparator returns 0, and otherwise completes the sort and traverses the sorted array testing for repeats. In a merge or heap sort, indeed, when the sort is completed, you will have compared every duplicate pair unless both elements were already in their final positions (which is unlikely). Thus, a tweaked sort algorithm should yield a huge performance improvement (I would have to prove that, but I guess the tweaked algorithm should be in the O(log(n)) on uniformly random data).
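As a rough sketch of the plain sort-then-scan test (without the tweaked early-exit sort described above; java.util imports assumed):

// Sorts a copy of the list and checks neighbouring elements for equality.
public static <T extends Comparable<? super T>> boolean hasDuplicateSorted(List<T> list) {
    List<T> copy = new ArrayList<T>(list);
    Collections.sort(copy);
    for (int i = 1; i < copy.size(); i++) {
        if (copy.get(i).compareTo(copy.get(i - 1)) == 0) {
            return true; // two equal elements ended up adjacent after sorting
        }
    }
    return false;
}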
If you want the set of duplicate values:
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
public class FindDuplicateInArrayList {

    public static void main(String[] args) {
        Set<String> uniqueSet = new HashSet<String>();
        List<String> dupesList = new ArrayList<String>();
        for (String a : args) {
            if (uniqueSet.contains(a))
                dupesList.add(a);
            else
                uniqueSet.add(a);
        }
        System.out.println(uniqueSet.size() + " distinct words: " + uniqueSet);
        System.out.println(dupesList.size() + " dupesList words: " + dupesList);
    }
}
And probably also think about trimming values or using lowercase ... depending on your case.
Simply put:
1) make sure all items are comparable
2) sort the array
3) iterate over the array and find duplicates
To find the duplicates in a List, use the following code: it will give you a set containing the duplicates.
public Set<?> findDuplicatesInList(List<?> beanList) {
    System.out.println("findDuplicatesInList::" + beanList);
    Set<Object> duplicateRowSet = new LinkedHashSet<Object>();
    for (int i = 0; i < beanList.size(); i++) {
        Object superString = beanList.get(i);
        System.out.println("findDuplicatesInList::superString::" + superString);
        for (int j = 0; j < beanList.size(); j++) {
            if (i != j) {
                Object subString = beanList.get(j);
                System.out.println("findDuplicatesInList::subString::" + subString);
                if (superString.equals(subString)) {
                    duplicateRowSet.add(beanList.get(j));
                }
            }
        }
    }
    System.out.println("findDuplicatesInList::duplicationSet::" + duplicateRowSet);
    return duplicateRowSet;
}
The best way to handle this issue is to use a HashSet:
ArrayList<String> listGroupCode = new ArrayList<>();
listGroupCode.add("A");
listGroupCode.add("A");
listGroupCode.add("B");
listGroupCode.add("C");
HashSet<String> set = new HashSet<>(listGroupCode);
ArrayList<String> result = new ArrayList<>(set);
Just print the result ArrayList and see the result without duplicates :)
This answer is written in Kotlin, but can easily be translated to Java.
If your ArrayList's size is within a fixed small range, then this is a great solution.
var duplicateDetected = false
if (arrList.size > 1) {
    for (i in 0 until arrList.size) {
        for (j in 0 until arrList.size) {
            if (i != j && arrList.get(i) == arrList.get(j)) {
                duplicateDetected = true
            }
        }
    }
}
private boolean isDuplicate() {
    for (int i = 0; i < arrayList.size(); i++) {
        for (int j = i + 1; j < arrayList.size(); j++) {
            if (arrayList.get(i).getName().trim().equalsIgnoreCase(arrayList.get(j).getName().trim())) {
                return true;
            }
        }
    }
    return false;
}
String tempVal = null;
for (int i = 0; i < l.size(); i++) {
    tempVal = l.get(i);        // take the ith object out of the list
    while (l.contains(tempVal)) {
        l.remove(tempVal);     // remove all matching entries
    }
    l.add(tempVal);            // finally add back a single entry
}
Note: this will have a major performance hit, though, as items are removed from the start of the list.
To address this, we have two options: 1) iterate in reverse order and remove elements, or 2) use a LinkedList instead of an ArrayList. Because interview questions are often biased towards removing duplicates from a List without using any other collection, the example above is that answer. In the real world, though, if I had to achieve this I would simply put the elements from the List into a Set!
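For reference, that "put the elements into a Set" option can be a one-liner; a LinkedHashSet is assumed here so that the original encounter order is preserved:

// De-duplicate the list l from the snippet above while keeping order.
List<String> deduplicated = new ArrayList<String>(new LinkedHashSet<String>(l));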
/**
 * Method to detect the presence of duplicates in a generic list.
 * Depends on the equals method of the concrete type; make sure to override it as required.
 */
public static <T> boolean hasDuplicates(List<T> list) {
    int count = list.size();
    T t1, t2;
    for (int i = 0; i < count; i++) {
        t1 = list.get(i);
        for (int j = i + 1; j < count; j++) {
            t2 = list.get(j);
            if (t2.equals(t1)) {
                return true;
            }
        }
    }
    return false;
}
An example of a concrete class that has overridden equals():

public class Reminder {
    private long id;
    private int hour;
    private int minute;

    public Reminder(long id, int hour, int minute) {
        this.id = id;
        this.hour = hour;
        this.minute = minute;
    }

    @Override
    public boolean equals(Object other) {
        if (other == null) return false;
        if (this.getClass() != other.getClass()) return false;
        Reminder otherReminder = (Reminder) other;
        if (this.hour != otherReminder.hour) return false;
        if (this.minute != otherReminder.minute) return false;
        return true;
    }
}
ArrayList<String> withDuplicates = new ArrayList<>();
withDuplicates.add("1");
withDuplicates.add("2");
withDuplicates.add("1");
withDuplicates.add("3");

HashSet<String> set = new HashSet<>();
ArrayList<String> withoutDuplicates = new ArrayList<>();
ArrayList<String> duplicates = new ArrayList<String>();

Iterator<String> dupIter = withDuplicates.iterator();
while (dupIter.hasNext()) {
    String dupWord = dupIter.next();
    if (!set.add(dupWord)) { // add() returns false if the word was already seen
        duplicates.add(dupWord);
    } else {
        withoutDuplicates.add(dupWord);
    }
}
System.out.println(duplicates);
System.out.println(withoutDuplicates);
A simple solution for learners.
// Method to find the duplicates.
public static List<Integer> findDuplicate(List<Integer> numList) {
    List<Integer> dupLst = new ArrayList<Integer>();
    // Compare each number against all the other numbers except itself.
    for (int i = 0; i < numList.size(); i++) {
        for (int j = 0; j < numList.size(); j++) {
            if (i != j && numList.get(i).equals(numList.get(j))) {
                boolean isNumExist = false;
                // The loop below avoids adding the same duplicate to the result list again.
                for (Integer aNum : dupLst) {
                    if (aNum.equals(numList.get(i))) {
                        isNumExist = true;
                        break;
                    }
                }
                if (!isNumExist) {
                    dupLst.add(numList.get(i));
                }
            }
        }
    }
    return dupLst;
}
I have a list of ObjectLocation, declared as

List<ObjectLocation> myLocations;

And here's what ObjectLocation looks like:

public class ObjectLocation {
    int locationID, ratingCount = 0;
}

OK, now myLocations holds thousands of locationIDs. If I have a particular locationID, how do I search the contents of myLocations for that locationID, and get the matching entry's index (within myLocations) and its ratingCount?
Well, you loop through all of the elements in the list, and if the locationID matches, you've found your element!

int idx = 0;
for (ObjectLocation ol : myLocations) {
    if (ol.locationID == searchedLocationID) {
        // found at index idx!!
    }
    idx++;
}
More efficiently, you could have a Map<Integer,ObjectLocation> where the key is the locationID of the ObjectLocation, to get much faster lookups.
For this sort of lookup I'd switch to using a Map keyed by locationID and store entries in the map like this:

Map<Integer, List<ObjectLocation>> myLocationMap = new HashMap<>();

List<ObjectLocation> currentList = myLocationMap.get(oneLocation.locationID);
if (currentList == null) {
    // We haven't stored anything at this locationID yet,
    // so create a new List and add it to the Map under
    // this locationID value.
    currentList = new ArrayList<>();
    myLocationMap.put(oneLocation.locationID, currentList);
}
currentList.add(oneLocation);
Now you can quickly get all of the ObjectLocation entries with a specific value for locationID by grabbing them from the map like this:
List<ObjectLocation> listOfLocations = myLocationMap.get(someLocationId);
This assumes that multiple ObjectLocation instances can have the same locationID value. If not then you wouldn't need a List<ObjectLocation> in the map, just a single ObjectLocation.
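On Java 8 and later, the null check when populating the map can be replaced with computeIfAbsent; a small sketch of the same grouping:

// Equivalent to the get()/null-check/put() sequence above.
myLocationMap.computeIfAbsent(oneLocation.locationID, id -> new ArrayList<>())
             .add(oneLocation);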
For you to easily search for and find your ObjectLocation objects, you should first define an .equals(Object o) method in the ObjectLocation class that allows one ObjectLocation to be compared to another. After that, all you have to do is use .indexOf(Object o) to get the index of the ObjectLocation you are looking for. Then extract that object and use its information as exemplified in the code below:
public class ObjectLocation {
    int locationID, ratingCount = 0;

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ObjectLocation))
            return false;
        ObjectLocation another = (ObjectLocation) o;
        if (locationID == another.locationID && ratingCount == another.ratingCount)
            return true;
        else
            return false;
    }

    public static void main(String[] args) {
        List<ObjectLocation> myLocations;
        ObjectLocation findThisLocation;
        ObjectLocation found;

        // Additional code here

        int index = myLocations.indexOf(findThisLocation);
        found = myLocations.get(index);
        int id = found.locationID;
        int rating = found.ratingCount;
    }
}
List<ObjectLocation> myLocations = new ArrayList<>();
int index = 0;
int particularId = 1; // your ID
int locationid = 0;
int ratingcount = 0;

for (int i = 0; i < myLocations.size(); i++) {
    if (myLocations.get(i).locationID == particularId) {
        index = i;
        locationid = myLocations.get(i).locationID;
        ratingcount = myLocations.get(i).ratingCount;
    }
}
For Java 8, I would use (without changing anything on the data structure like using a Map instead or knowing about ordering in the list):
Optional<ObjectLocation> found = myLocations.stream()
    .filter(location -> location.locationID == particularLocationID)
    .findAny();

if (found.isPresent()) {
    int ratingCount = found.get().ratingCount;
    …
}
When you need more performance for single searches, you may try parallelStream() instead of stream().
I have an array of objects (telephone directory entries, stored in the form Entry(surname, initials, extension)) which I would like to search efficiently. In order to do this I'm trying to use Arrays.binarySearch(). I have two separate methods for searching the array, one using names and the other using numbers. The array is sorted by surname in alphabetical order, as I insert each element in the correct place in my addEntry() method. I can use binarySearch() when searching by name as the array is sorted in alphabetical order, but the problem I have is that the array is not sorted when I search by number. I have overridden compareTo() in my Entry class for comparing surnames, but when I search by number I need to sort my array in ascending order of numbers, and I am unsure how to do this.
public int lookupNumberByName(String surname, String initials) {
    int index = 0;
    if (countElements() == directory.length) {
        Entry lookup = new Entry(surname, initials);
        index = Arrays.binarySearch(directory, lookup);
    }
    else if (countElements() != directory.length) {
        Entry[] origArray = directory;
        Entry[] cutArray = Arrays.copyOfRange(directory, 0, countElements());
        directory = cutArray;
        Entry lookup = new Entry(surname, initials);
        index = Arrays.binarySearch(directory, lookup);
        directory = origArray;
    }
    return index;
}
I would like to do something like this for my LookupByNumber() method -
public int LookupByNumber(int extension) {
    Entry[] origArray1 = directory;
    Entry[] cutArray1 = Arrays.copyOfRange(directory, 0, countElements());
    directory = cutArray1;
    Arrays.sort(directory); // sort in ascending order of numbers
    Entry lookup1 = new Entry(extension);
    int index1 = Arrays.binarySearch(directory, lookup1);
    String surname1 = directory[index1].getSurname();
    String initials1 = directory[index1].getInitials();
    directory = origArray1;
    int arrayPos = lookupNumberByName(surname1, initials1);
    return arrayPos;
}
My compareTo method -
public int compareTo(Entry other) {
    return this.surname.compareTo(other.getSurname());
}
Help very much appreciated
edit - I realize arrays are not the best data structure for this, but I have been specifically asked to use an array for this task.
Update - How exactly does sort(T[] a, Comparator<? super T> c) work? When I try writing my own Comparator -
public class numberSorter implements Comparator<Entry> {

    @Override
    public int compare(Entry o1, Entry o2) {
        if (o1.getExtension() > o2.getExtension()) {
            return 1;
        }
        if (o1.getExtension() == o2.getExtension()) {
            return 0;
        }
        if (o1.getExtension() < o2.getExtension()) {
            return -1;
        }
        return -1;
    }
}
And calling Arrays.sort(directory, new numberSorter()); I get the following exception -
java.lang.NullPointerException
    at java.lang.String.compareTo(Unknown Source)
    at project.Entry.compareTo(Entry.java:45)
    at project.Entry.compareTo(Entry.java:1)
    at java.util.Arrays.binarySearch0(Unknown Source)
    at java.util.Arrays.binarySearch(Unknown Source)
    at project.ArrayDirectory.LookupByNumber(ArrayDirectory.java:128)
    at project.test.main(test.java:29)
What exactly am I doing wrong?
Rather than keeping the Entry objects in Arrays, keep them in Maps. For example, you'd have one Map that mapped the Surname to the Entry, and another that mapped the Extension to the Entry. You can then efficiently look up the entry by Surname or Extension by calling the get() method on the appropriate Map.
If the Map is a TreeMap, the lookup is about the same speed as a binary search (O(log n)). If you use a HashMap, it can be even faster once you have a large number of entries.
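A rough sketch of that two-map layout, kept inside your directory class and reusing the getSurname()/getExtension() getters your Entry already has; it assumes surnames and extensions are unique (if several people can share a surname, map to a List<Entry> instead). java.util imports are assumed:

// Two indexes over the same Entry objects: one by surname, one by extension.
Map<String, Entry> bySurname = new TreeMap<String, Entry>();
Map<Integer, Entry> byExtension = new HashMap<Integer, Entry>();

public void addEntry(Entry e) {
    bySurname.put(e.getSurname(), e);
    byExtension.put(e.getExtension(), e);
}

public Entry lookupByName(String surname) {
    return bySurname.get(surname);     // O(log n) with a TreeMap
}

public Entry lookupByNumber(int extension) {
    return byExtension.get(extension); // close to O(1) with a HashMap
}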
I know the differences between Set and List(unique vs. duplications allowed, not ordered/ordered, etc). What I'm looking for is a set that keeps the elements ordered(that's easy), but I also need to be able to recover the index in which an element was inserted. So if I insert four elements, then I want to be able to know the order in which one of them was inserted.
MySet<String> set = new MySet<String>();
set.add("one");
set.add("two");
set.add("three");
set.add("four");

int index = set.getIndex("two");
So at any given moment I can check if a String was already added, and get the index of the string in the set. Is there anything like this, or I need to implement it myself?
After creating the Set, just convert it to a List and get elements by index from the List. Use a LinkedHashSet if the indices should reflect insertion order; a plain HashSet makes no ordering guarantee:

Set<String> stringsSet = new LinkedHashSet<>();
stringsSet.add("string1");
stringsSet.add("string2");

List<String> stringsList = new ArrayList<>(stringsSet);
stringsList.get(0); // "string1"
stringsList.get(1); // "string2"
A small static custom method in a Util class would help:
public static <T> int getIndex(Set<T> set, T value) {
    int result = 0;
    for (T entry : set) {
        if (entry.equals(value)) return result;
        result++;
    }
    return -1;
}
If you need/want one class that is a Set and offers a getIndex() method, I strongly suggest to implement a new Set and use the decorator pattern:
public class IndexAwareSet<T> implements Set<T> {
    private Set<T> set;

    public IndexAwareSet(Set<T> set) {
        this.set = set;
    }

    // ... implement all methods from Set and delegate to the internal Set

    public int getIndex(T value) {
        int result = 0;
        for (T entry : set) {
            if (entry.equals(value)) return result;
            result++;
        }
        return -1;
    }
}
You can extend LinkedHashSet, adding your desired getIndex() method. It takes 15 minutes to implement and test: just go through the set using an iterator and a counter, and check each object for equality. If found, return the counter.
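A sketch of that idea; the class name and the -1 "not found" convention are of course arbitrary:

import java.util.LinkedHashSet;

// LinkedHashSet iterates in insertion order, so the counter is the insertion index.
public class IndexedLinkedHashSet<T> extends LinkedHashSet<T> {
    public int getIndex(T value) {
        int index = 0;
        for (T element : this) {
            if (element.equals(value)) {
                return index;
            }
            index++;
        }
        return -1; // not present in the set
    }
}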
One solution (though not very pretty) is to use the Apache Commons List/Set combination SetUniqueList:

import org.apache.commons.collections.list.SetUniqueList;

final List<Long> vertexes = SetUniqueList.setUniqueList(new LinkedList<>());

It is a list without duplicates:
https://commons.apache.org/proper/commons-collections/javadocs/api-3.2.2/index.html?org/apache/commons/collections/list/SetUniqueList.html
How about adding the strings to a Hashtable where the value is an index:
Hashtable<String, Integer> itemIndex = new Hashtable<>();
itemIndex.put("First String",1);
itemIndex.put("Second String",2);
itemIndex.put("Third String",3);
int indexOfThirdString = itemIndex.get("Third String");
You can send your set data to a new list:

ArrayList<String> myList = new ArrayList<>();
myList.addAll(uniqueNameSet);
myList.indexOf("xxx");