How could I go about detecting (returning true/false) whether an ArrayList contains more than one of the same element in Java?
Many thanks,
Terry
Edit
Forgot to mention that I am not looking to compare "Blocks" with each other but their integer values. Each "block" has an int and this is what makes them different.
I find the int of a particular Block by calling a method named "getNum" (e.g. table1[0][2].getNum()).
Simplest: dump the whole collection into a Set (using the Set(Collection) constructor or Set.addAll), then see if the Set has the same size as the ArrayList.
List<Integer> list = ...;
Set<Integer> set = new HashSet<Integer>(list);
if(set.size() < list.size()){
/* There are duplicates */
}
Update: If I'm understanding your question correctly, you have a 2d array of Block, as in
Block table[][];
and you want to detect if any row of them has duplicates?
In that case, I could do the following, assuming that Block implements "equals" and "hashCode" correctly:
for (Block[] row : table) {
Set<Block> set = new HashSet<Block>();
for (Block cell : row) {
set.add(cell);
}
if (set.size() < 6) { //has duplicate
}
}
I'm not 100% sure of that for syntax, so it might be safer to write it as
for (int i = 0; i < 6; i++) {
Set<Block> set = new HashSet<Block>();
for (int j = 0; j < 6; j++)
set.add(table[i][j]);
...
Set.add returns false if the item being added is already in the set, so you could even short-circuit and bail out on the first add that returns false if all you want to know is whether there are any duplicates.
Improved code, using return value of Set#add instead of comparing the size of list and set.
public static <T> boolean hasDuplicate(Iterable<T> all) {
Set<T> set = new HashSet<T>();
// Set#add returns false if the set does not change, which
// indicates that a duplicate element has been added.
for (T each: all) if (!set.add(each)) return true;
return false;
}
With Java 8+ you can use the Stream API:
boolean areAllDistinct(List<Block> blocksList) {
return blocksList.stream().map(Block::getNum).distinct().count() == blocksList.size();
}
If you are looking to avoid having duplicates at all, then you should just cut out the middle process of detecting duplicates and use a Set.
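For example, a minimal sketch along those lines (Block and getNum() come from the question; the blocks collection is a placeholder):
Set<Integer> seenNums = new HashSet<>();
for (Block block : blocks) {
    // add() returns false if the value is already present, so the duplicate
    // is rejected at insertion time instead of being detected afterwards
    if (!seenNums.add(block.getNum())) {
        // handle the duplicate (skip it, report an error, ...)
    }
}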
Improved code to return the duplicate elements:
it can find duplicates in any Collection,
returns the duplicated elements as a list,
and the unique elements can still be obtained from the Set.
public static <T> List<T> getDuplicate(Collection<T> list) {
final List<T> duplicatedObjects = new ArrayList<T>();
Set<T> set = new HashSet<T>() {
@Override
public boolean add(T e) {
if (contains(e)) {
duplicatedObjects.add(e);
}
return super.add(e);
}
};
for (T t : list) {
set.add(t);
}
return duplicatedObjects;
}
public static <T> boolean hasDuplicate(Collection<T> list) {
    return !getDuplicate(list).isEmpty();
}
I needed to do a similar operation for a Stream, but couldn't find a good example. Here's what I came up with.
public static <T> boolean areUnique(final Stream<T> stream) {
final Set<T> seen = new HashSet<>();
return stream.allMatch(seen::add);
}
This has the advantage of short-circuiting when duplicates are found early rather than having to process the whole stream and isn't much more complicated than just putting everything in a Set and checking the size. So this case would roughly be:
List<T> list = ...
boolean allDistinct = areUnique(list.stream());
If your elements are somehow Comparable (whether the order has any real meaning is irrelevant -- it just needs to be consistent with your definition of equality), the fastest duplicate-removal solution is to sort the list (O(n log(n))) and then do a single pass looking for repeated elements (that is, equal elements that follow each other), which is O(n).
The overall complexity is going to be O(n log(n)), which is roughly the same as what you would get with a Set (n times log(n)), but with a much smaller constant. This is because the constant in sort/dedup results from the cost of comparing elements, whereas the cost with the set is most likely to result from a hash computation plus one (possibly several) hash comparisons. That is assuming a hash-based Set implementation; a tree-based one gives you O(n log²(n)), which is even worse.
As I understand it, however, you do not need to remove duplicates, but merely test for their existence. So you could hand-code a merge or heap sort on your array that simply exits returning true (i.e. "there is a dup") as soon as your comparator returns 0, and otherwise completes the sort and traverses the sorted array testing for repeats. In a merge or heap sort, indeed, when the sort is completed, you will have compared every duplicate pair unless both elements were already in their final positions (which is unlikely). Thus, a tweaked sort algorithm should yield a huge performance improvement (I would have to prove that, but I guess the tweaked algorithm should be in the O(log(n)) on uniformly random data).
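For reference, a plain (non-tweaked) version of the sort-then-scan check might look like the sketch below; it sorts a copy of the list with Collections.sort and reports a duplicate as soon as two adjacent elements compare equal:
public static <T extends Comparable<? super T>> boolean hasDuplicateBySorting(List<T> list) {
    List<T> copy = new ArrayList<>(list);
    Collections.sort(copy);                        // O(n log n)
    for (int i = 1; i < copy.size(); i++) {
        if (copy.get(i).compareTo(copy.get(i - 1)) == 0) {
            return true;                           // two equal neighbours = duplicate
        }
    }
    return false;
}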
If you want the set of duplicate values:
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
public class FindDuplicateInArrayList {
public static void main(String[] args) {
Set<String> uniqueSet = new HashSet<String>();
List<String> dupesList = new ArrayList<String>();
for (String a : args) {
if (uniqueSet.contains(a))
dupesList.add(a);
else
uniqueSet.add(a);
}
System.out.println(uniqueSet.size() + " distinct words: " + uniqueSet);
System.out.println(dupesList.size() + " dupesList words: " + dupesList);
}
}
And probably also think about trimming values or using lowercase ... depending on your case.
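For instance, normalizing each word before the lookup (a hypothetical adaptation of the loop body above) could look like this; Set.add already reports whether the word was new:
String normalized = a.trim().toLowerCase();
if (!uniqueSet.add(normalized)) {   // add() returns false if the word was already seen
    dupesList.add(normalized);
}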
Simply put:
1) make sure all items are comparable
2) sort the array
3) iterate over the array and find duplicates
To find the duplicates in a List, use the following code; it will give you a Set containing the duplicates.
public Set<?> findDuplicatesInList(List<?> beanList) {
System.out.println("findDuplicatesInList::"+beanList);
Set<Object> duplicateRowSet=null;
duplicateRowSet=new LinkedHashSet<Object>();
for(int i=0;i<beanList.size();i++){
Object superString=beanList.get(i);
System.out.println("findDuplicatesInList::superString::"+superString);
for(int j=0;j<beanList.size();j++){
if(i!=j){
Object subString=beanList.get(j);
System.out.println("findDuplicatesInList::subString::"+subString);
if(superString.equals(subString)){
duplicateRowSet.add(beanList.get(j));
}
}
}
}
System.out.println("findDuplicatesInList::duplicationSet::"+duplicateRowSet);
return duplicateRowSet;
}
The best way to handle this issue is to use a HashSet:
ArrayList<String> listGroupCode = new ArrayList<>();
listGroupCode.add("A");
listGroupCode.add("A");
listGroupCode.add("B");
listGroupCode.add("C");
HashSet<String> set = new HashSet<>(listGroupCode);
ArrayList<String> result = new ArrayList<>(set);
Just print the result ArrayList to see the result without duplicates :)
This answer is written in Kotlin, but can easily be translated to Java.
If your arraylist's size is within a fixed small range, then this is a great solution.
var duplicateDetected = false
if(arrList.size > 1){
for(i in 0 until arrList.size){
for(j in 0 until arrList.size){
if(i != j && arrList.get(i) == arrList.get(j)){
duplicateDetected = true
}
}
}
}
private boolean isDuplicate() {
for (int i = 0; i < arrayList.size(); i++) {
for (int j = i + 1; j < arrayList.size(); j++) {
if (arrayList.get(i).getName().trim().equalsIgnoreCase(arrayList.get(j).getName().trim())) {
return true;
}
}
}
return false;
}
String tempVal = null;
for (int i = 0; i < l.size(); i++) {
tempVal = l.get(i); //take the ith object out of list
while (l.contains(tempVal)) {
l.remove(tempVal); //remove all matching entries
}
l.add(tempVal); //at last add one entry
}
Note: this will have a major performance hit, though, as items are removed from the start of the list.
To address this, we have two options: 1) iterate in reverse order and remove elements, or 2) use a LinkedList instead of an ArrayList. Because of the biased questions asked in interviews ("remove duplicates from a List without using any other collection"), the above example is the answer. In the real world, though, if I had to achieve this I would simply put the elements from the List into a Set (a one-line version is sketched below).
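For completeness, the Set-based approach is a one-liner; a LinkedHashSet also preserves the original element order (this assumes l is the List<String> from the snippet above):
List<String> deduped = new ArrayList<>(new LinkedHashSet<>(l));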
/**
* Method to detect presence of duplicates in a generic list.
* Depends on the equals method of the concrete type. Make sure to override it as required.
*/
public static <T> boolean hasDuplicates(List<T> list){
int count = list.size();
T t1,t2;
for(int i=0;i<count;i++){
t1 = list.get(i);
for(int j=i+1;j<count;j++){
t2 = list.get(j);
if(t2.equals(t1)){
return true;
}
}
}
return false;
}
An example of a concrete class that has overridden equals() :
public class Reminder{
private long id;
private int hour;
private int minute;
public Reminder(long id, int hour, int minute){
this.id = id;
this.hour = hour;
this.minute = minute;
}
@Override
public boolean equals(Object other){
if(other == null) return false;
if(this.getClass() != other.getClass()) return false;
Reminder otherReminder = (Reminder) other;
if(this.hour != otherReminder.hour) return false;
if(this.minute != otherReminder.minute) return false;
return true;
}
}
ArrayList<String> withDuplicates = new ArrayList<>();
withDuplicates.add("1");
withDuplicates.add("2");
withDuplicates.add("1");
withDuplicates.add("3");

ArrayList<String> withoutDuplicates = new ArrayList<>();
ArrayList<String> duplicates = new ArrayList<>();
HashSet<String> seen = new HashSet<>();

for (String word : withDuplicates) {
    if (!seen.add(word)) {          // add() returns false when the word was already seen
        duplicates.add(word);
    } else {
        withoutDuplicates.add(word);
    }
}
System.out.println(duplicates);        // [1]
System.out.println(withoutDuplicates); // [1, 2, 3]
A simple solution for learners.
//Method to find the duplicates.
public static List<Integer> findDuplicate(List<Integer> numList){
    List<Integer> dupLst = new ArrayList<Integer>();
    //Compare each number against all the other numbers except itself.
    for(int i = 0; i < numList.size(); i++) {
        for(int j = 0; j < numList.size(); j++) {
            // use equals(), not ==, when comparing Integer objects by value
            if(i != j && numList.get(i).equals(numList.get(j))) {
                //Avoid adding the same number to the result list twice.
                if(!dupLst.contains(numList.get(i))) {
                    dupLst.add(numList.get(i));
                }
            }
        }
    }
    return dupLst;
}
My program takes in thousands of PL/SQL functions, procedures and views, saves them as objects and then adds them to an array list. My array list stores objects with the following format:
ArrayList<PLSQLItemStore> storedList = new ArrayList<>();
storedList.add(new PLSQLItemStore(String, String, String, Long ));
storedList.add(new PLSQLItemStore(Name, Type, FileName, DatelastModified));
What I want to do is remove duplicate objects from the ArrayList based on their name, dropping the older object as determined by its dateLastModified variable. The approach I took was an outer loop and an inner loop, with each object comparing itself to every other object and changing its name to "remove" if it is considered the older one. The program then makes one final pass backwards through the ArrayList, removing any object whose name is set to "remove". While this works fine, it seems extremely inefficient: 1,000 objects mean 1,000,000 passes. I was wondering if someone could help me make it more efficient? Thanks.
Sample Input:
storedList.add(new PLSQLItemStore("a", "function", "players.sql", 1234));
storedList.add(new PLSQLItemStore("a", "function", "team.sql", 2345));
storedList.add(new PLSQLItemStore("b", "function", "toon.sql", 1111));
storedList.add(new PLSQLItemStore("c", "function", "toon.sql", 2222));
storedList.add(new PLSQLItemStore("c", "function", "toon.sql", 1243));
storedList.add(new PLSQLItemStore("d", "function", "toon.sql", 3333));
ArrayList Iterator:
for(int i = 0; i < storedList.size();i++)
{
for(int k = 0; k < storedList.size();k++)
{
if (storedList.get(i).getName().equalsIgnoreCase("remove"))
{
System.out.println("This was already removed");
break;
}
if (storedList.get(i).getName().equalsIgnoreCase(storedList.get(k).getName()) && // checks to see if it is valid to be removed
!storedList.get(k).getName().equalsIgnoreCase("remove") &&
i != k )
{
if(storedList.get(i).getLastModified() >= storedList.get(k).getLastModified())
{
storedList.get(k).setName("remove");
System.out.println("Set To Remove");
}
else
{
System.out.println("Not Older");
}
}
}
}
Final Pass to remove Objects:
System.out.println("size: " + storedList.size());
for (int i= storedList.size() - 1; i >= 0; i--)
{
if (storedList.get(i).getName().equalsIgnoreCase("remove"))
{
System.out.println("removed: " + storedList.get(i).getName());
storedList.remove(i);
}
}
System.out.println("size: " + storedList.size());
You need to make PLSQLItemStore implement hashCode and equals methods and then you can use Set to remove the duplicates.
public class PLSQLItemStore {
private String name;
@Override
public int hashCode() {
int hash = 7;
hash = 47 * hash + (this.name != null ? this.name.hashCode() : 0);
return hash;
}
@Override
public boolean equals(Object obj) {
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
final PLSQLItemStore other = (PLSQLItemStore) obj;
if ((this.name == null) ? (other.name != null) : !this.name.equals(other.name)) {
return false;
}
return true;
}
}
And then just do Set<PLSQLItemStore> withoutDups = new HashSet<>(storedList);
P.S. equals and hashCode are generated by NetBeans IDE.
Put them in a Guava ArrayListMultimap<String,PLSQLItemStore>.
Add each PLSQLItemStore using name as the key.
When you're done adding, loop through the multimap, sort each List with a Comparator<PLSQLItemStore> which sorts by dateLastModified, and pull out the last entry of each List - it will be the latest PLSQLItemStore.
Put these entries in another Map<String,PLSQLItemStore> (or List<PLSQLItemStore>, if you no longer care about the name) and throw away the ArrayListMultimap.
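A sketch of that approach using Guava (the getName()/getLastModified() accessors are assumed from the question; the helper name is illustrative):
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ListMultimap;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

static Map<String, PLSQLItemStore> keepNewestPerName(List<PLSQLItemStore> storedList) {
    ListMultimap<String, PLSQLItemStore> byName = ArrayListMultimap.create();
    for (PLSQLItemStore store : storedList) {
        byName.put(store.getName(), store);                 // group stores by name
    }
    Map<String, PLSQLItemStore> newestByName = new HashMap<>();
    for (String name : byName.keySet()) {
        List<PLSQLItemStore> group = new ArrayList<>(byName.get(name));
        // sort each group by dateLastModified; the last entry is the newest
        group.sort(Comparator.comparingLong(PLSQLItemStore::getLastModified));
        newestByName.put(name, group.get(group.size() - 1));
    }
    return newestByName;
}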
Building off of Petr Mensik's answer, you should implement equals and hashCode. From there, you can put items into the map. If you come across a duplicate, you can decide then what to do:
Map<String, PLSQLItemStore> storeMap = new HashMap<String, PLSQLItemStore>();
for(PLSQLItemStore currentStore : storedList) {
// See if an item exists in the map with this name
PLSQLItemStore buffStore = storeMap.get(currentStore.getName());
// If this value was never in the map, put it in the map and move on
if(buffStore == null) {
storeMap.put(currentStore.getName(), currentStore);
continue;
}
// If we've gotten here, then an item with this name is already in buffStore.
// Keep whichever of the two was modified more recently.
if(currentStore.getLastModified() > buffStore.getLastModified())
    storeMap.put(currentStore.getName(), currentStore);
}
Your map is now dup-free. You can iterate over its values later in your code:
for(PLSQLItemStore currentStore : storeMap.values()) {
    // Do whatever you want with your items
}
Is there a better way to do the following in Java, without using external libraries?
I need to model a group/child (tree-like) structure of ints (primitives). In JSON-like notation:
[{1,1}, {1,2}, {2,1}, {3,1}]
I need to support addition and removal of elements (an element is a pair {group, child}) without duplication.
I am thinking of keeping a data structure like:
ArrayList<HashMap<Integer,Integer>>
To add:
Iterate through the ArrayList, check each HashMap's key and value against the value to insert, and insert it if it does not exist.
To delete:
Iterate through the ArrayList, check each HashMap's key and value against the value to delete, and delete it if it exists.
Is there a better data structure/approach using only the standard library?
As per one of the answers below, I made a class like this.
Please let me know if there is anything to watch out for. I expect (and am going to try out) that ArrayList will handle add/remove correctly by using the equals method of the KeyValue class. Thanks.
static class KeyValue {
int groupPos;
int childPos;
KeyValue(int groupPos, int childPos) {
this.groupPos = groupPos;
this.childPos = childPos;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
KeyValue keyValue = (KeyValue) o;
if (childPos != keyValue.childPos) return false;
if (groupPos != keyValue.groupPos) return false;
return true;
}
@Override
public int hashCode() {
int result = groupPos;
result = 31 * result + childPos;
return result;
}
}
If I understand what you're trying to do, this may be simpler:
TreeMap<Integer,TreeSet<Integer>>
or
HashMap<Integer,HashSet<Integer>>
So, rather than
[{1,1}, {1,2}, {2,1}, {3,1}]
you'd have
[{1, {1, 2}},
{2, {1}},
{3, {1}}]
Note that all 4 of the above classes automatically handle eliminating duplicates.
To add:
TreeMap<Integer, TreeSet<Integer>> map;
TreeSet<Integer> set = map.get(group);
if (set == null) // create it if it doesn't exist
{
set = new TreeSet<Integer>();
map.put(group, set);
}
set.add(child);
To remove:
TreeMap<Integer, TreeSet<Integer>> map;
TreeSet<Integer> set = map.get(group);
set.remove(child);
if (set.isEmpty()) // remove it if it is now empty
map.remove(group);
You may write a class named KeyValue with two properties to hold group and child, and add KeyValue objects to an ArrayList. For the CRUD operations, you can implement equals and compareTo in your KeyValue pair class.
Instead of HashMap, use a class called Pair with two fields {group,child} which will implement Comparable interface. Then implement/override its equals(), hashCode() and compareTo() methods. Then use either a List<Pair> or Set<Pair> depending on your needs to hold them. Having compareTo() implemented gives you the flexibility to sort Pairs easily too.
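A minimal sketch of such a Pair class (the names are illustrative, not from the original answer):
public class Pair implements Comparable<Pair> {
    private final int group;
    private final int child;

    public Pair(int group, int child) {
        this.group = group;
        this.child = child;
    }

    @Override
    public int compareTo(Pair other) {
        // order by group first, then by child
        int cmp = Integer.compare(group, other.group);
        return cmp != 0 ? cmp : Integer.compare(child, other.child);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Pair)) return false;
        Pair p = (Pair) o;
        return group == p.group && child == p.child;
    }

    @Override
    public int hashCode() {
        return 31 * group + child;
    }
}
With equals/hashCode in place, a Set<Pair> rejects duplicates automatically, and compareTo lets you keep a sorted List<Pair> or use a TreeSet<Pair>.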
I am new to the data structures world, but I think we can use the following, based on the assumption that no two objects in the Set will be equal:
Set<SomeObject> validSet = new HashSet<>(); // use generics here
A HashSet will provide constant time for add/delete/contains.
class SomeObject {
    Integer parent;
    Integer child;
    // define equals() and hashCode() based on your requirement
}
Going by your question, I think you want to present this line
[{1,1}, {1,2}, {2,1}, {3,1}]
as
Group 1 -> 1, 2 (from the first two pairs)
Group 2 -> 1 (from the third pair)
Group 3 -> 1 (from the fourth pair)
The data structure that suits this hierarchy best is:
Map<Integer,Set<Integer>> map = new HashMap<Integer,Set<Integer>>();
where the key part of the map stores the group number, and the value part stores a TreeSet holding the children of that group.
As an example:
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
class TreeLike
{
public static void main(String[] args)
{
Map<Integer,Set<Integer>> map = new HashMap<Integer,Set<Integer>>();
int groups[] = {1,2,3,4,5,6,7};
//To add new group in map
for (int i = 0 ; i < groups.length; i++)
{
Set<Integer> child = new TreeSet<Integer>();
child.add(1);child.add(2);child.add(3);child.add(4);child.add(5);
map.put(groups[i],child);
}
//To add new child(8) to a group (say group 1)
Set<Integer> child = map.get(1);
if (child != null)
{
child.add(8);
map.put(1,child);
}
//To remove a child (say child 4) from group 3
child = map.get(3);
if (child != null)
{
child.remove(4);
map.put(3,child);
}
//To Iterate through all trees
Set<Map.Entry<Integer,Set<Integer>>> entrySet = map.entrySet();
Iterator<Map.Entry<Integer,Set<Integer>>> iterator = entrySet.iterator();
while (iterator.hasNext())
{
Map.Entry<Integer,Set<Integer>> entry = iterator.next();
int group = entry.getKey();
Set<Integer> children = entry.getValue();
System.out.println("Group "+group+" children-->"+children);
}
}
}
I have an object Riziv with three variables: id, cnk and product. I search a database for these objects and add them to an ArrayList, ArrayList<Riziv> list.
Now I need to check whether all objects in this list have the same cnk: if so, return true; otherwise, return all objects whose cnk differs, together with an error message.
public class Riziv {
    String id, cnk, product;
}
ArrayList<Riziv> list = getArrayListFromDatabank(id);
public void getDuplicatedWhichHasTheSameCnk() {
}
Using only standard JDK structures (a MultiMap would require Guava), you can do this:
public List<Riziv> getDuplicates(final List<Riziv> l)
{
    final HashMap<String, List<Riziv>> m = new HashMap<String, List<Riziv>>();
    final List<Riziv> ret = new ArrayList<Riziv>();
    String cnk;
    for (final Riziv r: l) {
        cnk = r.getCnk();
        if (!m.containsKey(cnk))
            m.put(cnk, new ArrayList<Riziv>());
        m.get(cnk).add(r);
    }
    List<Riziv> tmp;
    for (final Map.Entry<String, List<Riziv>> entry: m.entrySet()) {
        tmp = entry.getValue();
        if (tmp.size() == 1) // no dups
            continue;
        ret.addAll(tmp);
    }
    return ret;
}
ret will contain the duplicates. You could change the function to return a Map<String, List<Riziv>> instead and filter out entries whose list size is only one; you would then get a map with the conflicting cnks as keys and the list of duplicates as values.
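A sketch of that variant, grouping by cnk and keeping only the groups that actually conflict (the method name is illustrative; computeIfAbsent and removeIf require Java 8):
public Map<String, List<Riziv>> getDuplicatesByCnk(final List<Riziv> l) {
    final Map<String, List<Riziv>> m = new HashMap<>();
    for (final Riziv r : l) {
        m.computeIfAbsent(r.getCnk(), k -> new ArrayList<>()).add(r);
    }
    // drop cnks that occur only once; what remains are the conflicting groups
    m.values().removeIf(group -> group.size() == 1);
    return m;
}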
I am not clear exactly what you want; however, I suspect you want something like this (Multimap here is Guava's):
Multimap<String, Riziv> multiMap = ArrayListMultimap.create();
List<Riziv> list = getArrayListFromDatabank(id);
for (Riziv r : list)
    multiMap.put(r.getCnk(), r);

for (String cnk : multiMap.keySet()) {
    Collection<Riziv> sameCnk = multiMap.get(cnk);
    // check size and compare entries
}
The multi-map will have the list of Riziv objects for each Cnk.
One way to do it is to write a comparator to sort the list by the cnk String and then compare each consecutive cnk to the next; if there is a duplicate, the two entries will be right next to each other.
1.) Sort the list using a comparator by sorting on the cnk variable.
2.) Compare each element in the list to the next for duplicates.
There are probably many other ways to solve this; this is just the first that came to mind.
I did not test this so you have been forewarned lol:
ArrayList<Riziv> rizArray = new ArrayList<Riziv>();
//Sort the array by the CNK variable.
Collections.sort(rizArray, new Comparator<Riziv>(){
@Override
public int compare(Riziv arg0, Riziv arg1) {
//Return the comparison of the Strings.
//Use .compareToIgnoreCase if you want to ignore upper/lower case.
return arg0.getCnk().compareTo(arg1.getCnk());
}
});
//List should be in alphabetical order at this point.
List<Riziv> duplicates = new ArrayList<Riziv>();
Riziv rizPrevious = null;
for(Riziv riz: rizArray){
if(rizPrevious == null){
rizPrevious = riz;
continue;
}
if(riz.getCnk().compareTo(rizPrevious.getCnk()) == 0){
duplicates.add(riz);
}
rizPrevious = riz;
}