I have an ArrayList<String>, and I want to remove repeated strings from it. How can I do this?
If you don't want duplicates in a Collection, you should consider why you're using a Collection that allows duplicates. The easiest way to remove repeated elements is to add the contents to a Set (which will not allow duplicates) and then add the Set back to the ArrayList:
Set<String> set = new HashSet<>(yourList);
yourList.clear();
yourList.addAll(set);
Of course, this destroys the ordering of the elements in the ArrayList.
Although converting the ArrayList to a HashSet effectively removes duplicates, if you need to preserve insertion order, I'd suggest using this variant instead:
// list is some List of Strings
Set<String> s = new LinkedHashSet<>(list);
Then, if you need to get back a List reference, you can use the conversion constructor again.
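For example, a minimal sketch of that round trip, reusing the s set above:
// rebuild the List from the LinkedHashSet; insertion order is preserved
List<String> deduplicated = new ArrayList<>(s);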
In Java 8:
List<String> deduped = list.stream().distinct().collect(Collectors.toList());
Please note that the hashCode()/equals() contract for list members must be respected for the filtering to work properly.
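For illustration, if the list held a custom type instead of String, a minimal (hypothetical) sketch of honoring that contract could look like this:
class Person {
    private final String name;

    Person(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        return name.equals(((Person) o).name);
    }

    @Override
    public int hashCode() {
        return name.hashCode(); // must be consistent with equals()
    }
}
With both methods overridden, list.stream().distinct() can recognize two Person objects with the same name as duplicates.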
Suppose we have a list of String like:
List<String> strList = new ArrayList<>(5);
// insert up to five items to list.
Then we can remove duplicate elements in multiple ways.
Prior to Java 8
List<String> deDupStringList = new ArrayList<>(new HashSet<>(strList));
Note: If we want to maintain the insertion order then we need to use LinkedHashSet in place of HashSet
Using Guava
List<String> deDupStringList2 = Lists.newArrayList(Sets.newHashSet(strList));
Using Java 8
List<String> deDupStringList3 = strList.stream().distinct().collect(Collectors.toList());
Note: In case we want to collect the result in a specific list implementation e.g. LinkedList then we can modify the above example as:
List<String> deDupStringList3 = strList.stream().distinct()
.collect(Collectors.toCollection(LinkedList::new));
We can also use parallelStream in the above code, but it may not give the expected performance benefits. Check this question for more.
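For completeness, a minimal sketch of the parallel variant (same result; distinct() still keeps the first occurrence for an ordered stream, but often at extra cost):
List<String> deDupStringList4 = strList.parallelStream().distinct().collect(Collectors.toList());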
If you don't want duplicates, use a Set instead of a List. To convert a List to a Set you can use the following code:
// list is some List of Strings
Set<String> s = new HashSet<String>(list);
If really necessary you can use the same construction to convert a Set back into a List.
Java 8 streams provide a very simple way to remove duplicate elements from a list, using the distinct() method.
If we have a list of cities and we want to remove duplicates from that list, it can be done in a single line:
List<String> cityList = new ArrayList<>();
cityList.add("Delhi");
cityList.add("Mumbai");
cityList.add("Bangalore");
cityList.add("Chennai");
cityList.add("Kolkata");
cityList.add("Mumbai");
cityList = cityList.stream().distinct().collect(Collectors.toList());
You can also do it this way, and preserve order:
// delete duplicates (if any) from 'myArrayList'
myArrayList = new ArrayList<String>(new LinkedHashSet<String>(myArrayList));
Here's a way that doesn't affect your list ordering:
ArrayList<YourClass> l1 = new ArrayList<YourClass>();
ArrayList<YourClass> l2 = new ArrayList<YourClass>();
Iterator<YourClass> iterator = l1.iterator();
while (iterator.hasNext()) {
    YourClass o = iterator.next();
    if (!l2.contains(o)) l2.add(o);
}
l1 is the original list, and l2 is the list without repeated items.
(Make sure YourClass overrides the equals method according to what you want to count as equality.)
This can solve the problem:
private List<SomeClass> clearListFromDuplicateFirstName(List<SomeClass> list1) {
Map<String, SomeClass> cleanMap = new LinkedHashMap<String, SomeClass>();
for (int i = 0; i < list1.size(); i++) {
cleanMap.put(list1.get(i).getFirstName(), list1.get(i));
}
List<SomeClass> list = new ArrayList<SomeClass>(cleanMap.values());
return list;
}
It is possible to remove duplicates from an ArrayList without using a HashSet or another ArrayList.
Try this code:
ArrayList<String> lst = new ArrayList<String>();
lst.add("ABC");
lst.add("ABC");
lst.add("ABCD");
lst.add("ABCD");
lst.add("ABCE");
System.out.println("Duplicates List "+lst);
Object[] st = lst.toArray();
for (Object s : st) {
if (lst.indexOf(s) != lst.lastIndexOf(s)) {
lst.remove(lst.lastIndexOf(s));
}
}
System.out.println("Distinct List "+lst);
Output is
Duplicates List [ABC, ABC, ABCD, ABCD, ABCE]
Distinct List [ABC, ABCD, ABCE]
There is also ImmutableSet from Guava as an option (here is the documentation):
ImmutableSet.copyOf(list);
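If you then need a List back, ImmutableSet keeps the first-occurrence order of the source, so a sketch like the following should work (assuming Guava is on the classpath):
// de-duplicate while keeping first-occurrence order, then view the result as a List
ImmutableList<String> deduped = ImmutableSet.copyOf(list).asList();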
Probably a bit overkill, but I enjoy this kind of isolated problem. :)
This code uses a temporary Set (for the uniqueness check) but removes elements directly inside the original list. Since element removal inside an ArrayList can induce a huge amount of array copying, the remove(int)-method is avoided.
public static <T> void removeDuplicates(ArrayList<T> list) {
int size = list.size();
int out = 0;
{
final Set<T> encountered = new HashSet<T>();
for (int in = 0; in < size; in++) {
final T t = list.get(in);
final boolean first = encountered.add(t);
if (first) {
list.set(out++, t);
}
}
}
while (out < size) {
list.remove(--size);
}
}
While we're at it, here's a version for LinkedList (a lot nicer!):
public static <T> void removeDuplicates(LinkedList<T> list) {
final Set<T> encountered = new HashSet<T>();
for (Iterator<T> iter = list.iterator(); iter.hasNext(); ) {
final T t = iter.next();
final boolean first = encountered.add(t);
if (!first) {
iter.remove();
}
}
}
Use the RandomAccess marker interface to present a unified solution for List:
public static <T> void removeDuplicates(List<T> list) {
if (list instanceof RandomAccess) {
// use first version here
} else {
// use other version here
}
}
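As a sketch of what those placeholders could look like (not the author's exact code), the two strategies can be inlined into one method; subList(...).clear() truncates the tail of a random-access list in a single call:
public static <T> void removeDuplicates(List<T> list) {
    final Set<T> encountered = new HashSet<T>();
    if (list instanceof RandomAccess) {
        int out = 0;
        for (int in = 0; in < list.size(); in++) {
            final T t = list.get(in);
            if (encountered.add(t)) {
                list.set(out++, t);
            }
        }
        list.subList(out, list.size()).clear(); // drop the leftover tail in one call
    } else {
        for (Iterator<T> iter = list.iterator(); iter.hasNext(); ) {
            if (!encountered.add(iter.next())) {
                iter.remove();
            }
        }
    }
}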
EDIT: I guess the generics-stuff doesn't really add any value here.. Oh well. :)
public static void main(String[] args){
ArrayList<Object> al = new ArrayList<Object>();
al.add("abc");
al.add('a');
al.add('b');
al.add('a');
al.add("abc");
al.add(10.3);
al.add('c');
al.add(10);
al.add("abc");
al.add(10);
System.out.println("Before Duplicate Remove:"+al);
for(int i=0;i<al.size();i++){
for(int j=i+1;j<al.size();j++){
if(al.get(i).equals(al.get(j))){
al.remove(j);
j--;
}
}
}
System.out.println("After Removing duplicate:"+al);
}
If you're willing to use a third-party library, you can use the method distinct() in Eclipse Collections (formerly GS Collections).
ListIterable<Integer> integers = FastList.newListWith(1, 3, 1, 2, 2, 1);
Assert.assertEquals(
FastList.newListWith(1, 3, 2),
integers.distinct());
The advantage of using distinct() instead of converting to a Set and then back to a List is that distinct() preserves the order of the original List, retaining the first occurrence of each element. It's implemented by using both a Set and a List.
MutableSet<T> seenSoFar = UnifiedSet.newSet();
int size = list.size();
for (int i = 0; i < size; i++)
{
T item = list.get(i);
if (seenSoFar.add(item))
{
targetCollection.add(item);
}
}
return targetCollection;
If you cannot convert your original List into an Eclipse Collections type, you can use ListAdapter to get the same API.
MutableList<Integer> distinct = ListAdapter.adapt(integers).distinct();
Note: I am a committer for Eclipse Collections.
If you are using a model type List<T>/ArrayList<T>, I hope this helps you.
Here is my code without using any other data structure such as a Set or HashMap:
for (int i = 0; i < Models.size(); i++){
for (int j = i + 1; j < Models.size(); j++) {
if (Models.get(i).getName().equals(Models.get(j).getName())) {
Models.remove(j);
j--;
}
}
}
If you want to preserve the order, then it is best to use LinkedHashSet,
because if you then pass this List to an insert query by iterating it, the order will be preserved.
Try this:
LinkedHashSet<String> link = new LinkedHashSet<String>(); // fill with your values
List<String> listOfValues = new ArrayList<String>();
listOfValues.addAll(link); // addAll copies the elements; add(link) would insert the whole set as one element
This conversion is very helpful when you want to return a List rather than a Set.
These three lines of code can remove the duplicated elements from an ArrayList or any collection:
List<Entity> entities = repository.findByUserId(userId);
Set<Entity> s = new LinkedHashSet<Entity>(entities);
entities.clear();
entities.addAll(s);
int dups = 0; // counts the removed duplicates
for (int a = 0; a < myArray.size(); a++) {
    for (int b = a + 1; b < myArray.size(); b++) {
        if (myArray.get(a).equalsIgnoreCase(myArray.get(b))) {
            myArray.remove(b);
            dups++;
            b--;
        }
    }
}
When you are filling the ArrayList, use a condition for each element. For example:
ArrayList< Integer > al = new ArrayList< Integer >();
// fill 1
for ( int i = 0; i <= 5; i++ )
if ( !al.contains( i ) )
al.add( i );
// fill 2
for (int i = 0; i <= 10; i++ )
if ( !al.contains( i ) )
al.add( i );
for( Integer i: al )
{
System.out.print( i + " ");
}
We will get a list [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
Code:
List<String> duplicatList = new ArrayList<String>();
duplicatList = Arrays.asList("AA","BB","CC","DD","DD","EE","AA","FF");
//above AA and DD are duplicate
Set<String> uniqueList = new HashSet<String>(duplicatList);
duplicatList = new ArrayList<String>(uniqueList); // let the GC free the old list
System.out.println("Removed Duplicate : "+duplicatList);
Note: Definitely, there will be memory overhead.
ArrayList<String> city=new ArrayList<String>();
city.add("rajkot");
city.add("gondal");
city.add("rajkot");
city.add("gova");
city.add("baroda");
city.add("morbi");
city.add("gova");
HashSet<String> hashSet = new HashSet<String>();
hashSet.addAll(city);
city.clear();
city.addAll(hashSet);
Toast.makeText(getActivity(),"" + city.toString(),Toast.LENGTH_SHORT).show();
You can use a nested loop as follows:
ArrayList<Class1> l1 = new ArrayList<Class1>();
ArrayList<Class1> l2 = new ArrayList<Class1>();
Iterator<Class1> iterator1 = l1.iterator();
while (iterator1.hasNext())
{
    Class1 c1 = iterator1.next();
    boolean repeated = false; // reset for every element, otherwise one match skips all later elements
    for (Class1 _c : l2) {
        if (_c.getId() == c1.getId())
            repeated = true;
    }
    if (!repeated)
        l2.add(c1);
}
LinkedHashSet will do the trick.
String[] arr2 = {"5","1","2","3","3","4","1","2"};
Set<String> set = new LinkedHashSet<String>(Arrays.asList(arr2));
for(String s1 : set)
System.out.println(s1);
System.out.println( "------------------------" );
String[] arr3 = set.toArray(new String[0]);
for(int i = 0; i < arr3.length; i++)
System.out.println(arr3[i].toString());
//output: 5,1,2,3,4
List<String> result = new ArrayList<String>();
Set<String> set = new LinkedHashSet<String>();
String s = "ravi is a good!boy. But ravi is very nasty fellow.";
StringTokenizer st = new StringTokenizer(s, " ,. ,!");
while (st.hasMoreTokens()) {
result.add(st.nextToken());
}
System.out.println(result);
set.addAll(result);
result.clear();
result.addAll(set);
System.out.println(result);
output:
[ravi, is, a, good, boy, But, ravi, is, very, nasty, fellow]
[ravi, is, a, good, boy, But, very, nasty, fellow]
This can be used for a list of your custom objects:
public List<Contact> removeDuplicates(List<Contact> list) {
    // Set set1 = new LinkedHashSet(list);
    Set set = new TreeSet(new Comparator() {
        @Override
        public int compare(Object o1, Object o2) {
            if (((Contact) o1).getId().equalsIgnoreCase(((Contact) o2).getId()) /*&&
                ((Contact) o1).getName().equalsIgnoreCase(((Contact) o2).getName())*/) {
                return 0;
            }
            return 1;
        }
    });
    set.addAll(list);
    final List newList = new ArrayList(set);
    return newList;
}
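Note that the comparator above never returns a negative value, which technically breaks the Comparator contract; a variant that respects it (assuming Contact.getId() returns a String) could be:
// case-insensitive ordering by id, consistent with the duplicate check above
Set<Contact> set = new TreeSet<>(Comparator.comparing(Contact::getId, String.CASE_INSENSITIVE_ORDER));
set.addAll(list);
List<Contact> newList = new ArrayList<>(set);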
As said before, you should use a class implementing the Set interface instead of List to be sure of the uniqueness of elements. If you need the elements kept in sorted order, the SortedSet interface can be used; the TreeSet class implements that interface.
import java.util.*;
class RemoveDupFrmString
{
public static void main(String[] args)
{
String s="appsc";
Set<Character> unique = new LinkedHashSet<Character> ();
for(char c : s.toCharArray()) {
System.out.println(unique.add(c));
}
for(char dis:unique){
System.out.println(dis);
}
}
}
public Set<Object> findDuplicates(List<Object> list) {
Set<Object> items = new HashSet<Object>();
Set<Object> duplicates = new HashSet<Object>();
for (Object item : list) {
if (items.contains(item)) {
duplicates.add(item);
} else {
items.add(item);
}
}
return duplicates;
}
ArrayList<String> list = new ArrayList<String>();
Set<String> unique = new LinkedHashSet<String>();
Set<String> dup = new LinkedHashSet<String>();
list.add("Hello");
list.add("Hello");
list.add("how");
list.add("are");
list.add("u");
list.add("u");
for (Iterator<String> iterator = list.iterator(); iterator.hasNext();)
{
    String value = iterator.next();
    System.out.println(value);
    if (!unique.add(value))   // add() returns false when the value is already present
        dup.add(value);
}
System.out.println(unique);
System.out.println(dup);
If you want to remove duplicates from an ArrayList, the logic below can be used:
public static Object[] removeDuplicate(Object[] inputArray)
{
long startTime = System.nanoTime();
int totalSize = inputArray.length;
Object[] resultArray = new Object[totalSize];
int newSize = 0;
for(int i=0; i<totalSize; i++)
{
Object value = inputArray[i];
if(value == null)
{
continue;
}
for(int j=i+1; j<totalSize; j++)
{
if(value.equals(inputArray[j]))
{
inputArray[j] = null;
}
}
resultArray[newSize++] = value;
}
long endTime = System.nanoTime()-startTime;
System.out.println("Total Time-B:"+endTime);
// trim the trailing nulls so the result contains only the distinct values
return Arrays.copyOf(resultArray, newSize);
}
I have an ArrayList containing some strings:
ArrayList<String> strList = new ArrayList<String>();
strList.addAll(Arrays.asList("interface", "list", "Primitive", "class", "primitive", "List", "Interface", "lIst", "Primitive"));
I have written a method to remove the case-insensitive duplicate strings from the ArrayList:
public static ArrayList<String> removeDuplicates(ArrayList<String> strList) {
for(int i = 0; i < strList.size(); i++) {
for(int j = i + 1; j < strList.size(); j++) {
if(strList.get(i).equalsIgnoreCase(strList.get(j))){
strList.remove(j);
j--;
}
}
}
return strList;
}
Output:
[interface, list, Primitive, class]
However, I am trying to remove just the first occurrence of the strings. I am trying to make it so my output would equal:
[Interface, lIst, Primitive, class]
These would be the last occurrences of the duplicates in the ArrayList.
What I'm trying to do specifically:
The version of the string that remains is the same as the last occurrence. In other words, the version of the last occurrence stays at the location of the first occurrence.
I think that removing from the ArrayList is not a good idea. It is better to use a Map to create a new list:
public static List<String> removeDuplicates(List<String> strList) {
Map<String, String> map = new LinkedHashMap<>();
strList.forEach(item -> map.put(item.toLowerCase(), item));
return new ArrayList<>(map.values());
}
Input: [interface, list, Primitive, class, primitive, List, Interface, lIst, Primitive]
Output: [Interface, lIst, Primitive, class]
P.S.
The same with a one-line Stream, but a bit less clear:
public static List<String> removeDuplicates(List<String> strList) {
return new ArrayList<>(strList.stream().collect(Collectors.toMap(String::toLowerCase, str -> str, (prev, next) -> next, LinkedHashMap::new)).values());
}
Right now it keeps the first occurrence, so if you want to keep the last occurrence you can just go through the list in reverse:
for(int i = strList.size() - 1; i >= 0; i--) {
for(int j = i - 1; j >= 0; j--) {
...
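A complete sketch of that reversed version (an illustration, not the original poster's exact code; note that the surviving element stays at the position of its last occurrence, and after removing index j the index i must shift down by one as well):
public static ArrayList<String> removeDuplicatesKeepLast(ArrayList<String> strList) {
    for (int i = strList.size() - 1; i >= 0; i--) {
        for (int j = i - 1; j >= 0; j--) {
            if (strList.get(i).equalsIgnoreCase(strList.get(j))) {
                strList.remove(j);
                i--; // the element being compared against has shifted left by one
            }
        }
    }
    return strList;
}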
I have a stream of strings from a CSV file. These strings are converted to arrays and must be put in an object's setter, and the object into a HashMap as a value. How do I concatenate all incoming arrays into one and only then use the set method? Is there any better solution than concatenating the arrays before the set method?
Here is my code:
HashMap<Integer, Publication> innerMap = new HashMap<>();
try {
CsvReader csv = new CsvReader(filename);
csv.readHeaders();
while (csv.readRecord()) {
int id = Integer.parseInt(csv.get("ID"));
Publication pub = new Publication();
String names = csv.get("Names");
String[] namesArr = names.split(",");
if (!innerMap.containsKey(id)) {
innerMap.put(id, new Publication());
}
String[] merged = ????
pub.setNames(merged);
innerMap.put(au.getIdx(), pub);
}
csv.close();
} catch (IOException e) {
System.out.println("Exception : " + e);
}
Store them in a List first:
List<String[]> list = new ArrayList<>();
...
list.add(namesArr);
Then, once you've finished reading:
int size = 0;
for (String[] arr : list) {
size += arr.length;
}
List<String> all = new ArrayList<>(size);
for (String[] arr : list) {
all.addAll(Arrays.asList(arr));
}
The first loop helps to allocate the necessary memory to hold all of the data (otherwise there may be lots of reallocations and array copying in the ArrayList internally while you are adding elements to it in the second loop).
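If the setter ultimately needs a String[] again (as in the question), a small follow-up sketch using the all list built above (setNames is the hypothetical setter from the question):
String[] merged = all.toArray(new String[0]); // flatten the collected names into one array
pub.setNames(merged);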
This has already been answered using Apache Commons: How can I concatenate two arrays in Java?
Here's a pure Java 8 way:
String[] arr1 = { "a", "b", "c", "d" };
String[] arr2 = { "e", "f", "g" };
Stream<String> stream1 = Stream.of(arr1);
Stream<String> stream2 = Stream.of(arr2);
String[] arr = Stream.concat(stream1, stream2).toArray(String[]::new);
It looks like, if the map key exists, you want to extract the values, append the additional values and then put it back in.
I would use a getter, then run this concat function, which returns a new array. Since an array is capped by its size, you can't grow it unless you make a new array and copy everything over.
To concatenate two string arrays where a comes first:
String[] concat(String[] a, String[] b) {
    String[] out = new String[a.length + b.length];
    int i = 0;
    for (int j = 0; j < a.length; j++) {
        out[i] = a[j];
        i++;
    }
    for (int j = 0; j < b.length; j++) {
        out[i] = b[j];
        i++;
    }
    return out;
}
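For reference, the same concatenation can also be written with System.arraycopy, which avoids the manual index bookkeeping (a sketch, not part of the original answer):
String[] concat(String[] a, String[] b) {
    String[] out = new String[a.length + b.length];
    System.arraycopy(a, 0, out, 0, a.length);        // copy a into the front
    System.arraycopy(b, 0, out, a.length, b.length); // copy b right after it
    return out;
}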
Is this the correct method to sort an ArrayList?
The problem is that the list is not sorted.
out = new StringTokenizer(input.toString());
n = (out.countTokens());
for (int i = 0; i < n; i++) {
String[] words = { out.nextToken().toString() };
final List<String> wordList = Arrays.asList(words);
Collections.sort(wordList);
System.out.println(wordList.toString());
}
Each of your words[] arrays is composed of a single string, obtained from the next token of your StringTokenizer, and you are iterating in the exact order of the tokenization. So yes, your output will not be sorted. I presume you wanted to do something like this:
out = new StringTokenizer(input.toString());
int count = out.countTokens();
List<String> wordList = new ArrayList<String>(count);
for(int i = 0; i < count; i++) {
wordList.add(out.nextToken());
}
Collections.sort(wordList);
But don't use the tokenizer class, it's legacy. The following code will serve you better:
List<String> wordList = Arrays.asList(input.split("\\s"));
Collections.sort(wordList);
out.nextToken().toString() gives you one string, so your array length is 1, I presume.
Even if you put this into a loop, you sort on each iteration; you have to sort outside the loop.
StringTokenizer out = new StringTokenizer( input.toString());
List<String> wordList = new ArrayList< String >();
while( out.hasMoreTokens()) {
wordList.add( out.nextToken());
}
Collections.sort( wordList );
System.out.println(wordList.toString());