As the title says, I'd like to store data from a result set in a hash map and then use it for further processing (max, min, avg, grouping).
So far, I have achieved this by using a plain hash map and implementing each operation from scratch, iterating over the map's (key, value) pairs.
My question is: is there a library that performs such operations?
For example, a method that computes the maximum value of a List, or a method that, given two same-size arrays, computes an index-by-index difference.
Thanks in advance.
Well, there is the Collections class, for instance. It has a bunch of useful static methods, but you'll have to read through them and choose the ones you need. Here is the documentation:
https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html
This class consists exclusively of static methods that operate on or
return collections.
Example:
List<Integer> list = new ArrayList<>();
List<String> stringList = new ArrayList<>();
// Populate the lists
for (int i = 0; i <= 10; ++i) {
    list.add(i);
    String newString = "String " + i;
    stringList.add(newString);
}
// add a negative value to the integer list
list.add(-1939);
// Print the min value from the integer list and the max value from the string list.
System.out.println("Min value: " + Collections.min(list));
System.out.println("Max value: " + Collections.max(stringList));
The output will be:
run:
Min value: -1939
Max value: String 9
BUILD SUCCESSFUL (total time: 0 seconds)
A similar question was answered before, for example here:
how to get maximum value from the List/ArrayList
There are some useful functions in the Collections API already.
For example, max or min:
Collections.max(arrayList);
Please look through the Collections documentation to see if there is a function that you need. There probably is.
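Since the original question involves a hash map, here is a minimal sketch (my own illustration, not from the answer above) of how Collections.max can be applied both to a map's values and to its entries:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class MapMaxExample {
    public static void main(String[] args) {
        Map<String, Integer> scores = new HashMap<>();
        scores.put("a", 10);
        scores.put("b", 25);
        scores.put("c", 17);

        // Maximum value across the map
        int maxValue = Collections.max(scores.values());

        // Entry with the maximum value (key + value)
        Map.Entry<String, Integer> maxEntry =
                Collections.max(scores.entrySet(), Map.Entry.comparingByValue());

        System.out.println(maxValue);          // 25
        System.out.println(maxEntry.getKey()); // b
    }
}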
You can use Java 8 streams for this.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class Testing {
    public static void main(String[] args) {
        // List of integers
        List<Integer> list = new ArrayList<>();
        list.add(7);
        list.add(5);
        list.add(4);
        list.add(6);
        list.add(9);
        list.add(11);
        list.add(12);

        // get a sorted list using streams
        System.out.println(list.stream().sorted().collect(Collectors.toList()));
        // find the min value in the list
        System.out.println(list.stream().min(Integer::compareTo).get());
        // find the max value in the list
        System.out.println(list.stream().max(Integer::compareTo).get());
        // find the average of the list
        System.out.println(list.stream().mapToInt(val -> val).average().getAsDouble());

        // Map of integers
        Map<Integer, Integer> map = new HashMap<>();
        map.put(1, 10);
        map.put(2, 12);
        map.put(3, 15);

        // find the max value in the map
        System.out.println(map.entrySet().stream().max((entry1, entry2) -> entry1.getValue().compareTo(entry2.getValue())).get().getValue());
        // find the key of the max value in the map
        System.out.println(map.entrySet().stream().max((entry1, entry2) -> entry1.getValue().compareTo(entry2.getValue())).get().getKey());
        // find the min value in the map
        System.out.println(map.entrySet().stream().min((entry1, entry2) -> entry1.getValue().compareTo(entry2.getValue())).get().getValue());
        // find the key of the min value in the map
        System.out.println(map.entrySet().stream().min((entry1, entry2) -> entry1.getValue().compareTo(entry2.getValue())).get().getKey());
        // find the average of the values in the map
        System.out.println(map.entrySet().stream().map(Map.Entry::getValue).mapToInt(val -> val).average().getAsDouble());
    }
}
Keep in mind that this will only work if your system has JDK 1.8 or later; on lower JDK versions streams are not supported.
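The question also mentioned an index-by-index difference of two same-size arrays, which Collections does not cover. A minimal stream-based sketch (my own illustration, assuming two int arrays a and b of equal length):

import java.util.Arrays;
import java.util.stream.IntStream;

public class ElementwiseDiff {
    public static void main(String[] args) {
        int[] a = {10, 20, 30};
        int[] b = {1, 2, 3};

        // diff[i] = a[i] - b[i]
        int[] diff = IntStream.range(0, a.length)
                              .map(i -> a[i] - b[i])
                              .toArray();

        System.out.println(Arrays.toString(diff)); // [9, 18, 27]
    }
}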
In Java 8 there are IntSummaryStatistics, LongSummaryStatistics and DoubleSummaryStatistics to calculate max, min, count, average and sum:
public static void main(String[] args) {
    List<Employee> resultSet = ...
    Map<String, DoubleSummaryStatistics> stats = resultSet.stream()
            .collect(Collectors.groupingBy(Employee::getName,
                    Collectors.summarizingDouble(Employee::getSalary)));
    stats.forEach((n, stat) -> System.out.println("Name " + n + " Average " + stat.getAverage() + " Max " + stat.getMax())); // min, sum, count can also be taken from stat
}

static class Employee {
    String name;
    Double salary;

    public String getName() {
        return name;
    }

    public Double getSalary() {
        return salary;
    }
}
For max, min and avg you can use Java 8 and its stream processing.
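A minimal sketch of that idea (my own example, not from the answer): a single summaryStatistics() call yields min, max, average and count in one pass.

import java.util.Arrays;
import java.util.IntSummaryStatistics;
import java.util.List;

public class StatsSketch {
    public static void main(String[] args) {
        List<Integer> values = Arrays.asList(7, 5, 4, 6, 9, 11, 12);

        // One pass over the stream collects all the basic aggregates
        IntSummaryStatistics stats = values.stream()
                                           .mapToInt(Integer::intValue)
                                           .summaryStatistics();

        System.out.println("min = " + stats.getMin());
        System.out.println("max = " + stats.getMax());
        System.out.println("avg = " + stats.getAverage());
    }
}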
Related
I have an Item class which contains code, quantity and amount fields, and a list of items which may contain many items (with the same code). I want to group the items by code and sum up their quantities and amounts.
I was able to achieve half of it using the stream's groupingBy and reducing. The grouping works, but the reduction reduces all of the grouped items into one single item that is repeated across the different codes (the groupingBy keys).
Shouldn't reducing here reduce the list of items for each code separately? Why is it returning the same combined item for all of them?
Below is sample code.
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class HelloWorld {
    public static void main(String[] args) {
        List<Item> itemList = Arrays.asList(
                createItem("CODE1", 1, 12),
                createItem("CODE2", 4, 22),
                createItem("CODE3", 5, 50),
                createItem("CODE4", 2, 11),
                createItem("CODE4", 8, 20),
                createItem("CODE2", 1, 42)
        );

        Map<String, Item> aggregatedItems = itemList
                .stream()
                .collect(Collectors.groupingBy(
                        Item::getCode,
                        Collectors.reducing(new Item(), (aggregatedItem, item) -> {
                            int aggregatedQuantity = aggregatedItem.getQuantity();
                            double aggregatedAmount = aggregatedItem.getAmount();
                            aggregatedItem.setQuantity(aggregatedQuantity + item.getQuantity());
                            aggregatedItem.setAmount(aggregatedAmount + item.getAmount());
                            return aggregatedItem;
                        })
                ));

        System.out.println("Map total size: " + aggregatedItems.size()); // expected 4
        System.out.println();

        aggregatedItems.forEach((key, value) -> {
            System.out.println("key: " + key);
            System.out.println("value - quantity: " + value.getQuantity() + " - amount: " + value.getAmount());
            System.out.println();
        });
    }

    private static Item createItem(String code, int quantity, double amount) {
        Item item = new Item();
        item.setCode(code);
        item.setQuantity(quantity);
        item.setAmount(amount);
        return item;
    }
}

class Item {
    private String code;
    private int quantity;
    private double amount;

    public Item() {
        quantity = 0;
        amount = 0.0;
    }

    public String getCode() { return code; }
    public int getQuantity() { return quantity; }
    public double getAmount() { return amount; }

    public void setCode(String code) { this.code = code; }
    public void setQuantity(int quantity) { this.quantity = quantity; }
    public void setAmount(double amount) { this.amount = amount; }
}
and below is the output.
Map total size: 4
key: CODE2
value - quantity: 21 - amount: 157.0
key: CODE1
value - quantity: 21 - amount: 157.0
key: CODE4
value - quantity: 21 - amount: 157.0
key: CODE3
value - quantity: 21 - amount: 157.0
You must not modify the input arguments to Collectors.reducing. new Item() is only executed once and all your reduction operations will share the same "aggregation instance". In other words: the map will contain the same value instance 4 times (you can easily check yourself with System.identityHashCode() or by comparing for reference-equality: aggregatedItems.get("CODE1") == aggregatedItems.get("CODE2")).
Instead, return a new result instance:
final Map<String, Item> aggregatedItems = itemList
        .stream()
        .collect(Collectors.groupingBy(
                Item::getCode,
                Collectors.reducing(new Item(), (item1, item2) -> {
                    final Item reduced = new Item();
                    reduced.setQuantity(item1.getQuantity() + item2.getQuantity());
                    reduced.setAmount(item1.getAmount() + item2.getAmount());
                    return reduced;
                })
        ));
Output:
Map total size: 4
key: CODE2
value - quantity: 5 - amount: 64.0
key: CODE1
value - quantity: 1 - amount: 12.0
key: CODE4
value - quantity: 10 - amount: 31.0
key: CODE3
value - quantity: 5 - amount: 50.0
You are using reducing, which assumes that you won't mutate the accumulator passed in. reducing won't create new Items for you for each new group, and expects you to create new Items and return them in the lambda, like this:
// this works as expected
.collect(Collectors.groupingBy(
        Item::getCode,
        Collectors.reducing(new Item(), (item1, item2) -> createItem(
                item1.getCode(),
                item1.getQuantity() + item2.getQuantity(),
                item1.getAmount() + item2.getAmount()
        ))
));
Reducing is therefore well suited to immutable objects like numbers or strings.
Since you are not creating new Items in your code, reducing keeps on reusing that same instance, resulting in the behaviour you see.
If you want to mutate the objects, you can do mutable reduction in a thread safe way with Collector.of:
.collect(Collectors.groupingBy(
        Item::getCode,
        Collector.of(Item::new, (aggregatedItem, item) -> {
            int aggregatedQuantity = aggregatedItem.getQuantity();
            double aggregatedAmount = aggregatedItem.getAmount();
            aggregatedItem.setQuantity(aggregatedQuantity + item.getQuantity());
            aggregatedItem.setAmount(aggregatedAmount + item.getAmount());
        }, (item1, item2) -> createItem(
                item1.getCode(),
                item1.getQuantity() + item2.getQuantity(),
                item1.getAmount() + item2.getAmount()
        ))
));
Notice that you now pass a reference to Item's constructor, i.e. a way to create new Items when necessary, as opposed to just a single new Item(). In addition, you also provide a third argument, the combiner, which tells the collector how to create a new item from two existing ones; it is used when the collector runs in a concurrent situation. (See here for more info about the combiner.)
This contrast between Collector.of and Collectors.reducing is the same contrast between Stream.reduce and Stream.collect. Learn more here.
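To make that contrast concrete, here is a small sketch of my own (not from the answer): summing integers with an immutable reduction via Stream.reduce versus a mutable reduction via the three-argument Stream.collect.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ReduceVsCollect {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4);

        // Immutable reduction: each step produces a new value, nothing is mutated
        int sumReduce = numbers.stream().reduce(0, Integer::sum);

        // Mutable reduction: a fresh container is created and mutated,
        // then containers are combined (relevant for parallel streams)
        int sumCollect = numbers.stream()
                .collect(AtomicInteger::new,
                         AtomicInteger::addAndGet,
                         (a, b) -> a.addAndGet(b.get()))
                .get();

        System.out.println(sumReduce);  // 10
        System.out.println(sumCollect); // 10
    }
}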
Mutable reduction vs Immutable reduction
In this case, Collectors.reducing() isn't the right tool because it is meant for immutable reduction, i.e. for performing a fold in which every reduction step results in the creation of a new immutable object.
But instead of generating a new object at each reduction step, you're changing the state of the object provided as the identity.
As a consequence, you're getting an incorrect result because the identity object is created only once. This single Item instance is used for accumulation, and references to it end up in every value of the map.
You can find more detailed information in the Stream API documentation, specifically in the sections Reduction and Mutable Reduction.
And here's a short quote explaining how Stream.reduce() works (the mechanism behind Collectors.reducing() is the same):
The accumulator function takes a partial result and the next element, and produces a new partial result.
Use mutable reduction
The problem can be fixed by generating a new instance of Item at every accumulation step for values mapped to the same key, but a more performant approach would be to use mutable reduction instead.
For that, you can implement a custom collector via the static method Collector.of():
Map<String, Item> aggregatedItems = itemList.stream()
        .collect(Collectors.groupingBy(
                Item::getCode,
                Collector.of(
                        Item::new,   // mutable container of the collector
                        Item::merge, // accumulator - defines how stream data is accumulated
                        Item::merge  // combiner - merges two containers when the stream runs in parallel
                )
        ));
For convenience, you can introduce a merge() method responsible for accumulating the properties of two items. It avoids repeating the same logic in the accumulator and the combiner, and keeps the collector implementation lean and readable.
public class Item {
    private String code;
    private int quantity;
    private double amount;

    // getters, constructor, etc.

    public Item merge(Item other) {
        this.quantity += other.quantity;
        this.amount += other.amount;
        return this;
    }
}
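As an aside (my own sketch, not part of the answer above): once merge() exists, the same aggregation can also be expressed without groupingBy, using Collectors.toMap with merge() as the merge function.

// Sketch: same aggregation via toMap, assuming the Item class with merge() shown above.
// Note: on a key collision this mutates the first Item seen for that code.
Map<String, Item> aggregatedItems = itemList.stream()
        .collect(Collectors.toMap(
                Item::getCode,   // key: the item code
                item -> item,    // value: the item itself
                Item::merge      // merge function applied when two items share a code
        ));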
I'm working on the next exercise from HackerRank: https://www.hackerrank.com/challenges/migratory-birds/problem?isFullScreen=false
So far, I need to optimize my source code in order to pass the tests related to execution time.
This is my source code:
class Result {

    /*
     * Complete the 'migratoryBirds' function below.
     *
     * The function is expected to return an INTEGER.
     * The function accepts INTEGER_ARRAY arr as parameter.
     */
    public static int migratoryBirds(List<Integer> arr) {
        // Write your code here
        int coincidences = 0;
        int maxValuesPerCategory = 0;

        // I'm using TreeMap because sorting is the key on this exercise
        Map<Integer, Integer> results = new TreeMap<>();
        List<Integer> targetKeys = new ArrayList<>();

        // 1. classifying values by coincidences
        for (Integer element : arr) {
            coincidences = Collections.frequency(arr, element);
            results.put(element, coincidences);
        }

        /*
         2. filtering categories by highest coincidences;
         if there is more than one, choose the label with the lowest value
         example: 4=5; 3=5 -> output = 3
         */
        // getting the value with the most coincidences
        maxValuesPerCategory = Collections.max(results.values());

        // iterate the map to identify which keys have the max value
        Set<Integer> keySet = results.keySet();
        for (Integer key : keySet) {
            if (results.get(key) == maxValuesPerCategory) {
                targetKeys.add(key);
            }
        }

        // 3. sorting the list ascending to obtain the lowest value
        Collections.sort(targetKeys);
        // get the first value (it should be the lowest label category)
        return targetKeys.get(0);
    }
}
I would like to ask for suggestions on how to optimize stages 2 and 3. From my point of view, the first stage is efficient in terms of execution, but if you have suggestions about it as well, please let me know.
Thanks a lot in advance.
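One possible direction (a sketch of my own, not from the original post, with a hypothetical rewrite of the same method): stage 1 is actually the expensive part, because Collections.frequency rescans the whole list for every element (O(n^2)). A single counting pass with Map.merge, followed by one pass over the TreeMap (whose keys are already in ascending order), avoids both the repeated scans and the extra sort.

// Hypothetical single-pass alternative, not the original solution
public static int migratoryBirds(List<Integer> arr) {
    Map<Integer, Integer> counts = new TreeMap<>();
    for (Integer element : arr) {
        counts.merge(element, 1, Integer::sum); // one pass: O(n log n) with a TreeMap
    }

    int bestType = -1;
    int bestCount = 0;
    for (Map.Entry<Integer, Integer> entry : counts.entrySet()) {
        // keys are visited in ascending order, so ties keep the lowest type
        if (entry.getValue() > bestCount) {
            bestCount = entry.getValue();
            bestType = entry.getKey();
        }
    }
    return bestType;
}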
I have two lists. One shows the number of successful attempts for each individual in a group of people for some game.
public class SuccessfulAttempts{
String name;
int successCount;
}
List<SuccessfulAttempts> success;
And the other shows the total number of attempts for each individual.
public class TotalAttempts{
String name;
int totalCount;
}
List<TotalAttempts> total;
And I want to show the percentage success for each person in the group.
public class PercentageSuccess{
String name;
float percentage;
}
List<PercentageSuccess> percentage;
And assume I have populated the first two lists like this.
success.add(new SuccessfulAttempts("Alice", 4));
success.add(new SuccessfulAttempts("Bob", 7));
total.add(new TotalAttempts("Alice", 5));
total.add(new TotalAttempts("Bob", 10));
Now I want to calculate the percentage success for each person using Java Streams. So I actually need this kind of a result for the list List<PercentageSuccess> percentage.
new PercentageSuccess("Alice", 80);
new PercentageSuccess("Bob", 70);
And I want to calculate them (Alice's percentage and Bob's percentage) in parallel (I know how to do it sequentially using a loop). How can I achieve this with Java Streams (or any other simple way)?
I would suggest converting one of your lists to a Map for easier access to the counts. Otherwise, for each value of one list you have to loop over the other list, which is O(n^2) complexity.
List<SuccessfulAttempts> success = new ArrayList<>();
List<TotalAttempts> total = new ArrayList<>();

success.add(new SuccessfulAttempts("Alice", 4));
success.add(new SuccessfulAttempts("Bob", 7));
total.add(new TotalAttempts("Alice", 5));
total.add(new TotalAttempts("Bob", 10));

// First create a Map
Map<String, Integer> attemptsMap = success.parallelStream()
        .collect(Collectors.toMap(SuccessfulAttempts::getName, SuccessfulAttempts::getSuccessCount));

// Loop through the list of players and calculate the percentage.
List<PercentageSuccess> percentage = total.parallelStream()
        // Remove players who have not participated from List 'total'
        // ('attempt' refers to a single element of List 'total').
        .filter(attempt -> attemptsMap.containsKey(attempt.getName()))
        // Calculate the percentage and create the required object
        .map(attempt -> new PercentageSuccess(attempt.getName(),
                (attemptsMap.get(attempt.getName()) * 100) / attempt.getTotalCount()))
        // Collect it back to a list
        .collect(Collectors.toList());

percentage.forEach(System.out::println);
If the lists are of the same size and ordered consistently, you can use integer indexes to access the original list elements.

List<PercentageSuccess> result = IntStream.range(0, size)
        .parallel()
        .mapToObj(index -> /* get the elements and construct the percentage for the person at this index */)
        .collect(Collectors.toList());

This means you have to create a method or constructor for PercentageSuccess which constructs the percentage from a given SuccessfulAttempts and TotalAttempts.

PercentageSuccess(SuccessfulAttempts success, TotalAttempts total) {
    this.name = success.name;
    // multiply by 100 so the value is an actual percentage (e.g. 80.0 rather than 0.8)
    this.percentage = 100f * success.successCount / total.totalCount;
}
Then you construct a stream of integers from 0 to size which is parallel:
IntStream.range(0, size).parallel()
This is effectively a parallel for loop. Then turn each integer into the PercentageSuccess of the index-th person (note that you must ensure that the lists are of the same size and not shuffled, otherwise my code is not correct).

.mapToObj(index -> new PercentageSuccess(success.get(index), total.get(index)))

and finally turn the Stream into a List with
.collect(Collectors.toList())
Also, this approach is not optimal if success or total is a LinkedList or another list implementation with O(n) cost for accessing an element by index.
private static List<PercentageAttempts> percentage(List<SuccessfulAttempts> success, List<TotalAttempts> total) {
    Map<String, Integer> successMap = success.parallelStream()
            .collect(Collectors.toMap(SuccessfulAttempts::getName, SuccessfulAttempts::getSuccessCount, (a, b) -> a + b));
    Map<String, Integer> totalMap = total.parallelStream()
            .collect(Collectors.toMap(TotalAttempts::getName, TotalAttempts::getTotalCount));

    return successMap.entrySet().parallelStream()
            .map(entry -> new PercentageAttempts(entry.getKey(),
                    entry.getValue() * 1.0f / totalMap.get(entry.getKey()) * 100))
            .collect(Collectors.toList());
}
I have collected some records from a file and want to perform group-by and minimum on the records, similar to SQL. The records are in the form of key-value pairs, where the value is not a float or double.
Here the values are version numbers, like the release-based version numbers every software has: 10.1.1, 10.1.2, 10.1.3, etc.
A 1.12
A 1.13
A 1.45
B 5.6
B 4.5
C 5.6.4
The output should be:
A 1.12
B 4.5
C 5.6.4
Initially I started to solve this problem by using a HashMap data structure:
Map<String,List<String>> map = new HashMap<>();
As the values are not float or double, I iterated through all values, removed the decimal points and concatenated the digits to form an integer.
E.g.: A 112, A 113
I got stuck at the point of how to find the key which has the minimum value. I tried to use a TreeMap but had no luck.
Can anyone help me find the key which has the minimum value?
The output should be: A 1.12, B 4.5, C 5.6.4
For a single record, e.g. C 5.6.4, the minimum is that single record.
Based on my data structure selection, Map<String, List<Integer>>, I am stuck on how to find the key which has the minimum value, like we do in SQL queries using group by and the min aggregate function. Here I got A -> [], A -> [], A -> [], B -> [], B -> [], C -> []. The challenge is finding the minimum among multiple lists for the same key; as you can see, based on my data structure selection, the same key has multiple lists.
Please find the solution below:
You can maintain a HashMap with the key as a String and the value as a PriorityQueue.

HashMap<String, PriorityQueue<String>> map = new HashMap<String, PriorityQueue<String>>();

You can group the values by key and maintain the values in the PriorityQueue.
Java's PriorityQueue is a min-heap with the smallest value stored at the root.
When you invoke the peek() method on the PriorityQueue it will return the minimum value stored at the root.
Below is sample code which will help you:
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.PriorityQueue;

public class GroupAndFindMinimum {

    public static void main(String[] args) {
        HashMap<String, PriorityQueue<String>> map = new HashMap<String, PriorityQueue<String>>();

        PriorityQueue<String> q1 = new PriorityQueue<String>();
        q1.add("1.12"); q1.add("1.13"); q1.add("1.45");
        PriorityQueue<String> q2 = new PriorityQueue<String>();
        q2.add("5.6"); q2.add("4.5");
        PriorityQueue<String> q3 = new PriorityQueue<String>();
        q3.add("5.6.4");

        map.put("A", q1);
        map.put("B", q2);
        map.put("C", q3);

        for (Iterator<Map.Entry<String, PriorityQueue<String>>> it = map.entrySet().iterator(); it.hasNext(); ) {
            Map.Entry<String, PriorityQueue<String>> t = it.next();
            System.out.println(t.getKey() + " " + t.getValue().peek());
        }
    }
}
Below is the output of the above program:
A 1.12
B 4.5
C 5.6.4
If you need the MAX value to be returned for each group, you can achieve that with the help of a Comparator as well.
Below is the code for that:
import java.util.Comparator;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.PriorityQueue;

public class GroupAndFindMinimum {

    public static void main(String[] args) {
        HashMap<String, PriorityQueue<String>> map = new HashMap<String, PriorityQueue<String>>();
        comparatorPQ comp = new comparatorPQ();

        PriorityQueue<String> q1 = new PriorityQueue<String>(3, comp);
        q1.add("1.12"); q1.add("1.13"); q1.add("1.45");
        PriorityQueue<String> q2 = new PriorityQueue<String>(2, comp);
        q2.add("5.6"); q2.add("4.5");
        PriorityQueue<String> q3 = new PriorityQueue<String>(1, comp);
        q3.add("5.6.4");

        map.put("A", q1);
        map.put("B", q2);
        map.put("C", q3);

        for (Iterator<Map.Entry<String, PriorityQueue<String>>> it = map.entrySet().iterator(); it.hasNext(); ) {
            Map.Entry<String, PriorityQueue<String>> t = it.next();
            System.out.println(t.getKey() + " " + t.getValue().peek());
        }
    }
}
class comparatorPQ implements Comparator<String> {

    @Override
    public int compare(String a, String b) {
        // reverse the natural String ordering so the largest value ends up at the root
        return b.compareTo(a);
    }
}
Output :
A 1.45
B 5.6
C 5.6.4
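As a side note (my own addition, not from the answer): the reverse ordering above can also be obtained with the built-in reverse-order comparator instead of a hand-written one.

// Equivalent max-at-the-root behaviour using the built-in reverse-order comparator (Java 8+)
PriorityQueue<String> maxQueue = new PriorityQueue<String>(Comparator.reverseOrder());

Like the original, this still compares the version strings lexicographically; the next answer shows how to compare them numerically part by part.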
The first challenge would be to find the "minimal" value. Simply removing the periods and treating the values as integers is insufficient - that would result in 6.5.4. being "smaller" than 1.2.3.4., which doesn't seem to be what you intended. A better approach would be to split the strings by the period and treat each element individually as an int:
public String min(String v1, String v2) {
    // Any string should be "smaller" than null
    if (v1 == null) {
        return v2;
    }
    if (v2 == null) {
        return v1;
    }

    // Split both of them and iterate over the common indexes:
    String[] v1parts = v1.split("\\.");
    String[] v2parts = v2.split("\\.");
    int commonLength = Math.min(v1parts.length, v2parts.length);
    for (int i = 0; i < commonLength; ++i) {
        int v1elem = Integer.parseInt(v1parts[i]);
        int v2elem = Integer.parseInt(v2parts[i]);
        if (v1elem < v2elem) {
            return v1;
        } else if (v1elem > v2elem) {
            return v2;
        }
    }

    // Done iterating the common indexes and they are all equal.
    // The shorter string is therefore the minimal one:
    if (v1parts.length < v2parts.length) {
        return v1;
    }
    return v2;
}
Now that you have such a function, it's just a matter of iterating the key-value pairs and placing the minimal value in a Map. E.g. (pseudo-code assuming you have some sort of Pair class):
Map<String, String> minimums = new HashMap<>();
for (Pair<String, String> entry : myListOfPairs) {
    String key = entry.getKey();
    minimums.put(key, min(entry.getValue(), minimums.get(key)));
}
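The same idea can also be written with a stream (my own sketch, assuming the Pair class and the min() method above are available in scope):

// Group by key and keep the minimal version per key, using min() as the merge function
Map<String, String> minimums = myListOfPairs.stream()
        .collect(Collectors.toMap(Pair::getKey, Pair::getValue, (a, b) -> min(a, b)));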
An O(n) algorithm that parses the input once and keeps on updating the output map should suffice. I am assuming that the input is provided in the form of two lists with equal size.
public Map<String, String> groupAndFindMinimum(ArrayList<String> inputKeys, ArrayList<String> inputValues) {
    Map<String, String> output = new HashMap<String, String>();
    int i = 0;
    for (String key : inputKeys) {
        if (output.containsKey(key)) {
            output.put(key, min(output.get(key), inputValues.get(i)));
        } else {
            output.put(key, inputValues.get(i));
        }
        i++;
    }
    return output;
}
I hope this helps.
Your algorithm to find minimum is correct.
Apologies for the newbie question, but what's the proper way to get a Set (say a LinkedHashSet) in reverse order? For lists there's Collections.reverse(List list), but how does one do it for a Set with ordered elements (like a LinkedHashSet)?
Sets are not ordered in general, so after sorting the elements as a list you need to put them into a Set implementation with a known iteration order, such as LinkedHashSet:
List list = new ArrayList(set);
Collections.sort(list, Collections.reverseOrder());
Set resultSet = new LinkedHashSet(list);
You could also use TreeSet with a comparator, but that is not as fast as the ArrayList method above.
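For completeness, a minimal sketch of that TreeSet variant (my own illustration): a TreeSet built with a reverse-order comparator keeps its elements in descending order as they are added.

import java.util.Comparator;
import java.util.Set;
import java.util.TreeSet;

public class ReverseTreeSetExample {
    public static void main(String[] args) {
        // TreeSet sorted with the reverse of the natural ordering
        Set<Integer> reversed = new TreeSet<>(Comparator.reverseOrder());
        reversed.add(1);
        reversed.add(20);
        reversed.add(7);

        System.out.println(reversed); // [20, 7, 1]
    }
}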
import java.util.ArrayList;
import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;

public class LargestArray {

    public static void main(String[] args) {
        ArrayList<Integer> al = new ArrayList<>();
        Set<Integer> set = new TreeSet<>();
        set.add(10);
        set.add(20);
        set.add(7);
        set.add(4);
        set.add(1);
        set.add(2);
        set.add(3);
        set.add(4);

        System.out.println("after Sorting");
        for (int i : set) {
            System.out.print(" " + i);
        }

        al.addAll(set);
        set.clear();
        Collections.reverse(al);

        System.out.println();
        System.out.println("After Reverse");
        for (int i : al) {
            System.out.print(" " + i);
        }
    }
}
Output:
after Sorting
1 2 3 4 7 10 20
After Reverse
20 10 7 4 3 2 1
Check this out
http://docs.oracle.com/javase/7/docs/api/java/util/TreeSet.html#descendingSet()
If you use a TreeSet you can get reverse order by calling descendingSet.
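A tiny sketch of that (my own example): descendingSet() returns a reverse-order view of the TreeSet without copying it.

import java.util.NavigableSet;
import java.util.TreeSet;

public class DescendingSetExample {
    public static void main(String[] args) {
        TreeSet<Integer> set = new TreeSet<>();
        set.add(1);
        set.add(7);
        set.add(20);

        // descendingSet() returns a reverse-order view backed by the original set
        NavigableSet<Integer> reversed = set.descendingSet();
        System.out.println(reversed); // [20, 7, 1]
    }
}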
I will explain with an example. Comments are added in the middle of the code for better understanding.
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ReverseLinkedHashSet {

    public static void main(String[] args) {
        // creating a LinkedHashSet object which is
        // of type String (or any type); we will take String as an example
        Set<String> cars = new LinkedHashSet<String>();

        // adding car elements to the LinkedHashSet object as below
        cars.add("Toyato");
        cars.add("Hundai");
        cars.add("Porshe");
        cars.add("BMW");

        // Iterating using an enhanced for-loop to see the insertion order.
        System.out.println("Insertion Order: Iterating LinkedHashSet\n");
        for (String car : cars) {
            System.out.println(car);
            // Output will be as below
            // Toyato
            // Hundai
            // Porshe
            // BMW
        }

        // Now convert to an ArrayList so we can reverse
        // the LinkedHashSet contents
        List<String> listOfCars = new ArrayList<String>(cars);
        Collections.reverse(listOfCars);

        // the reverse order of the LinkedHashSet contents
        // can be printed as below
        System.out.println("\n\n\nReverse Order of LinkedHashSet\n");
        for (String car : listOfCars) {
            System.out.println(car);
            // Output will be as below
            // BMW
            // Porshe
            // Hundai
            // Toyato
        }
    }
}
Also, I suggest not using LinkedHashSet without a strong reason; for a complex application it can reduce performance. Use HashSet instead.
With Java 8, I use the solution below:
Set<String> setTest = new HashSet<>();
setTest.add("1");
setTest.add("2");
setTest.add("3");
List<String> list = new ArrayList<>(setTest);
list.sort(Collections.reverseOrder());
Set<String> result = new LinkedHashSet<>(list);
for (String item : result) {
    System.out.println("---> " + item);
}
Result:
---> 3
---> 2
---> 1
Works for me.