I have a SQL table of doctor names and their clients.
Each doctor has multiple clients,
and one client can visit multiple doctors.
Here is the data as an array (pulled from a simple table):
[
{doctor="illies",client=4},
{doctor="illies",client=7},
{doctor="illies",client=1},
{doctor="houari",client=5},
{doctor="abdou",client=1},
{doctor="illies",client=2},
{doctor="abdou",client=1},
]
This data is already ordered, so the task is to let each client know its place in the queue.
For example:
The client with ID 1 is in third place in doctor "illies"'s queue,
and in first place in doctor "abdou"'s queue.
I don't know if I have explained it well. A friend of mine suggested that I
rearrange the array into a nested array like this (this array is not totally correct, but it gives the idea):
[doctor="abdou" => clients=[client1="1",client2="2" ], doctor="illies"=>clients=[...] ]
Now I just need an idea that could help me with my project. All this work is to display the client's queue position (the position of the client in the doctor's queue). Thank you so much.
It seems that each row in the input array can be represented as a class like this:
class DocClient {
    private String doctor;
    private int client;
    // constructor needed to build the sample data below
    public DocClient(String doctor, int client) { this.doctor = doctor; this.client = client; }
    public String getDoctor() { return this.doctor; }
    public int getClient() { return this.client; }
}
Then the array or list of DocClient needs to be converted not into a "nested array" but into a map where the doctor is used as the key and the value is the list of clients: Map<String, List<Integer>> docClients.
This map can be conveniently built with the Java Stream API using the collectors Collectors.groupingBy and Collectors.mapping:
List<DocClient> list = Arrays.asList(
new DocClient("illies", 4), new DocClient("illies", 4), new DocClient("illies", 1),
new DocClient("houari", 5), new DocClient("abdou", 1), new DocClient("illies", 2),
new DocClient("abdou", 2)
);
Map<String, List<Integer>> map = list
.stream()
.collect(Collectors.groupingBy(
DocClient::getDoctor, // use doctor as key via reference to getter
Collectors.mapping(
DocClient::getClient, // use `client` field
Collectors.toList() // convert to list
) // List<Integer> is value in map entry
));
// print the map
map.forEach((doc, clients) -> System.out.printf("%s -> %s%n", doc, clients));
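To get back to the original queue-position question, a minimal sketch on top of the map built above: a client's position is simply its index in the doctor's client list plus one.
// position of client 1 in doctor "illies"'s queue (1-based); indexOf returns -1 if the client is absent
int position = map.get("illies").indexOf(1) + 1;
System.out.println(position); // prints 3 with the sample data above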
I have two custom lists as follows.
List<OfficeName> = [{id: 1, offname: "Office1"}{id: 2, offname: "Office2"}]
List<OfficeLocation> = [{id: 1, offlocation: "location1"}{id: 2, offlocation: "location2"}]
I want result as follows:
List<OfficeDetails> = [{id: 1, offname: "Office1", offlocation: "location1" },
{id: 2, offname: "Office2", offlocation: "location2"}]
The first two lists need to be joined on the basis of "id" to give a new list, equivalent to a join operation on SQL tables.
My model classes are
public class OfficeName {
int id;
String offname;
//getter and setter
}
.................
public class OfficeLocation{
int id;
String offlocation;
//getter and setter
}
.........
Currently I am iterating and manually adding to a LinkedHashSet as follows.
{
List<OfficeName> officeName = new ArrayList<OfficeName>();
onr.findById(id).forEach(officeName::add); // adding values from auto wired Repository
List<OfficeLocation> officeLocation = new ArrayList<OfficeLocation>();
olr.findById(id).forEach(officeLocation::add); // adding values from auto wired Repository
LinkedHashSet<LinkedHashSet<String>> lhs = new LinkedHashSet<LinkedHashSet<String> >();
OfficeName officeName1 = new OfficeName();
OfficeLocation officeLocation1 = new OfficeLocation();
Iterator<OfficeName> onIterator = officeName.iterator();
Iterator<OfficeLocation> olIterator = officeLocation.iterator();
while (onIterator.hasNext()) {
officeName1 =onIterator.next();
int idon =officeName1.getId();
while(olIterator.hasNext()){
officeLocation1 = olIterator.next();
int idol = officeLocation1.getId();
if(idon==idol)
{
lhs.add(new LinkedHashSet<String>(Arrays.asList( String.valueOf(officeName1.getId()),officeName1.getOffname(),officeLocation1.getOfflocation())));
olIterator.remove();
break;
}
};
}
I am not sure whether this is the correct way to achieve this, as I am new to Java. In C#, this could be achieved through data tables. Please suggest whether there is a faster way.
Assuming both input lists:
Are distinct, with no duplicate id values in either, and…
Are complete, with a single object in both lists for each possible id value
… then we can get the work done with little code.
I use NavigableSet or SortedSet implementations to hold our input lists, the names and the locations. Though I have not verified, I assume being sorted will yield better performance when searching for a match across input collections.
To get the sorting done, we define a Comparator for each input collection: Comparator.comparingInt( OfficeName :: id ) & Comparator.comparingInt( OfficeLocation :: id ) where the double-colons make a method reference. To each NavigableSet we add the contents of our inputs, an unmodifiable list made with the convenient literals syntax of List.of.
To get the actual work done of joining these two input collections, we make a stream of either input collection. Then we produce a new object of our third joined class using inputs from each element of the stream plus its counterpart found via a stream of the other input collection. These newly produced objects of the third joined class are then collected into a list.
NavigableSet < OfficeName > officeNames = new TreeSet <>( Comparator.comparingInt( OfficeName :: id ) );
officeNames.addAll( List.of( new OfficeName( 1 , "Office1" ) , new OfficeName( 2 , "Office2" ) ) );
NavigableSet < OfficeLocation > officeLocations = new TreeSet <>( Comparator.comparingInt( OfficeLocation :: id ) );
officeLocations.addAll( List.of( new OfficeLocation( 1 , "location1" ) , new OfficeLocation( 2 , "location2" ) ) );
List < Office > offices =
        officeNames
                .stream()
                .map( officeName -> new Office(
                        officeName.id() ,
                        officeName.name() ,
                        officeLocations
                                .stream()
                                .filter( officeLocation -> officeLocation.id() == officeName.id() )
                                .findAny()
                                .get()
                                .location()
                ) )
                .toList();
Results:
officeNames = [OfficeName[id=1, name=Office1], OfficeName[id=2, name=Office2]]
officeLocations = [OfficeLocation[id=1, location=location1], OfficeLocation[id=2, location=location2]]
offices = [Office[id=1, name=Office1, location=location1], Office[id=2, name=Office2, location=location2]]
Our three classes, the two inputs and the third joined one, are all written as records here for their convenient brevity. This Java 16+ feature is a brief way to declare a class whose main purpose is to communicate data transparently and immutably. The compiler implicitly creates the constructor, getters, equals & hashCode, and toString. Note that a record can be defined locally as well as nested or separate.
public record OfficeName( int id , String name ) { }
public record OfficeLocation( int id , String location ) { }
public record Office( int id , String name , String location ) { }
Given the conditions outlined above, we could optimize by hand-writing loops to manage the matching of objects across the input collections, rather than using streams. But I would not be concerned about the performance impact unless you had huge amounts of data that had proven to be a bottleneck. Otherwise, using streams makes for less code and more fun.
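For illustration only, a minimal sketch of such a hand-written loop under the same assumptions (every id present exactly once in each collection, both sets sorted by id, java.util.ArrayList and Iterator imported):
// Merge-join by hand: both TreeSets are sorted by id, so matching elements line up pairwise.
List< Office > joinedOffices = new ArrayList<>();
Iterator< OfficeName > names = officeNames.iterator();
Iterator< OfficeLocation > locations = officeLocations.iterator();
while ( names.hasNext() && locations.hasNext() ) {
    OfficeName name = names.next();
    OfficeLocation location = locations.next();
    joinedOffices.add( new Office( name.id() , name.name() , location.location() ) );
}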
One of the lists (e.g. locations) should be converted into a map (HashMap) keyed by the field on which the join should be made, in this case the id field.
Then, assuming that the OfficeDetails class has an all-args constructor, the resulting list may be retrieved by streaming the other list, offices, and mapping its contents into new OfficeDetails, filling the remaining location argument by looking up the map.
List<OfficeName> offices = Arrays.asList(
new OfficeName(1, "Office1"), new OfficeName(2, "Office2"), new OfficeName(3, "Office3")
);
List<OfficeLocation> locations = Arrays.asList(
new OfficeLocation(1, "Location 1"), new OfficeLocation(2, "Location 2"), new OfficeLocation(4, "Location 4")
);
Map<Integer, OfficeLocation> mapLoc = locations
.stream()
.collect(Collectors.toMap(
OfficeLocation::getId,
loc -> loc,
(loc1, loc2) -> loc1 // to resolve possible duplicates
));
List<OfficeDetails> details = offices
.stream()
.filter(off -> mapLoc.containsKey(off.getId())) // inner join
.map(off -> new OfficeDetails(
off.getId(), off.getOffname(),
mapLoc.get(off.getId()).getOfflocation() // look up the map
))
.collect(Collectors.toList());
details.forEach(System.out::println);
Output (assuming toString is implemented in OfficeDetails):
{id: 1, offname: "Office1", offlocation: "Location 1"}
{id: 2, offname: "Office2", offlocation: "Location 2"}
If the offices list is not filtered by the mapLoc.containsKey condition, an implementation of LEFT JOIN is possible (null locations are stored in the resulting OfficeDetails); a sketch follows.
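A minimal sketch of that LEFT JOIN variant, reusing offices and mapLoc from above:
List<OfficeDetails> leftJoinDetails = offices
    .stream()
    .map(off -> new OfficeDetails(
        off.getId(), off.getOffname(),
        mapLoc.containsKey(off.getId())
            ? mapLoc.get(off.getId()).getOfflocation()
            : null)) // null location when there is no match
    .collect(Collectors.toList());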
To implement RIGHT JOIN (with null office names and all available locations), a lookup map should be created for offices, and the main iteration has to run over the locations list.
To implement FULL JOIN (where either name or location parts of OfficeDetails can be null), two maps need to be created and then joined:
Map<Integer, OfficeName> mapOff = offices
.stream()
.collect(Collectors.toMap(
OfficeName::getId,
off -> off,
(off1, off2) -> off1, // to resolve possible duplicates
LinkedHashMap::new
));
List<OfficeDetails> fullDetails = Stream.concat(mapOff.keySet().stream(), mapLoc.keySet().stream())
.distinct()
.map(id -> new OfficeDetails(
id,
Optional.ofNullable(mapOff.get(id)).map(OfficeName::getOffname).orElse(null),
Optional.ofNullable(mapLoc.get(id)).map(OfficeLocation::getOfflocation).orElse(null)
))
.collect(Collectors.toList());
fullDetails.forEach(System.out::println);
Output:
{id: 1, offname: "Office1", offlocation: "Location 1"}
{id: 2, offname: "Office2", offlocation: "Location 2"}
{id: 3, offname: "Office3", offlocation: null}
{id: 4, offname: null, offlocation: "Location 4"}
I can use the snippet below to retrieve the name when there is one entry in the list, by retrieving element 0; however, each NameResponse can have several names (e.g. a first name, a middle name and a surname). How can I retrieve x names associated with one customer? There could be 20 names, for argument's sake. I would like to implement this using a stream since I am using Java 8, but I am unsure how. Any suggestions?
private List<String> getNames(Customer customer) {
List<NameResponse> nameResponses = new ArrayList<>();
NameResponse nameResponse = new NameResponse();
nameResponse.setName("Test Name");
nameResponses.add(nameResponse);
customer.setNames(nameResponses);
return List.of(customer.getNames().get(0).getName());
}
Customer class:
private List<NameResponse> names;
NameResponse class:
private String name;
Something like below assuming you have the appropriate getters:
return customer.getNames()
.stream()
.map(NameResponse::getName)
.collect(Collectors.toList());
You could do that using the map operator on the stream and then collect to output a list:
return customer.getNames().stream()
.map(nameResponse -> nameResponse.getName())
.collect(Collectors.toList());
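If customer.getNames() could ever be null (an assumption, the question does not say), a hedged Java 8 variant with a guard (requires java.util.Optional and java.util.Collections):
return Optional.ofNullable(customer.getNames())
    .orElseGet(Collections::emptyList)
    .stream()
    .map(NameResponse::getName)
    .collect(Collectors.toList());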
I have the following problem:
I want to remove duplicate data from a list of a VO, where duplicates are entries whose registered field is the same. I will show you the solution I am trying. This is the data in the list I am building:
List<MyVo> dataList = new ArrayList<MyVo>();
MyVo data1 = new MyVo();
data1.setValidated(1);
data1.setName("Fernando");
data1.setRegistered("008982");
MyVo data2 = new MyVo();
data2.setValidated(0);
data2.setName("Orlando");
data2.setRegistered("008986");
MyVo data3 = new MyVo();
data3.setValidated(1);
data3.setName("Magda");
data3.setRegistered("008982");
MyVo data4 = new MyVo();
data4.setValidated(1);
data4.setName("Jess");
data4.setRegistered("006782");
dataList.add(data1);
dataList.add(data2);
dataList.add(data3);
dataList.add(data4);
The first thing I have to do is separate it into two different lists depending on whether the data is validated or not, based on the value of the validated field.
List<MyVo> registeredBusinesses = new ArrayList<MyVo>();
List<MyVo> unregisteredBusinesses = new ArrayList<MyVo>();
for (MyVo map : dataList) {
if (map.getValidated() == 0) {
unregisteredBusinesses.add(map);
}else {
registeredBusinesses.add(map);
}
}
Now, from the list of registered businesses, I want to remove the data that is repeated (the same value in the registered field) and make a new list. This is what I tried, but it doesn't work right:
List<MyVo> duplicateList = registeredBusinesses.stream()
    .filter(distinctByRegistered(MyVo::getRegistered))
    .collect(Collectors.toList());

public static <T> Predicate<T> distinctByRegistered(Function<? super T, ?> keyExtractor) {
    Set<Object> seen = ConcurrentHashMap.newKeySet();
    return t -> seen.add(keyExtractor.apply(t));
}
however using this method I get the following output:
{["validated":1,"name":"Fernando","registered":"008982"],
["validated":1,"name":"Jess","registered":"006782"]}
the output I want to obtain is the following:
the unregisteredBusinesses list:
{["validated":0,"name":"Orlando","registered":"008986"]}
the registeredBusinesses list:
{["validated":1,"name":"Jess","registered":"006782"]}
the registeredDuplicateBusinesses list:
{["validated":1,"name":"Fernando","registered":"008982"],
["validated":1,"name":"Magda","registered":"008982"]}
I don't know how to do it; could you help me? I would like to use lambdas to reduce the code, for example for the first loop where I separate the data into two lists.
Your approach is almost correct. Grouping by Function.identity() will properly flag duplicates (based on the equals() implementation!); you could also group by a unique property/id in your object if you have one. What you're missing is manipulating the resulting map to get a list with all duplicates. I've added comments describing what's happening here.
List<MyVo> duplicateList = registeredBusinesses.stream()
.collect(Collectors.groupingBy(Function.identity()))
.entrySet()
.stream()
.filter(e -> e.getValue().size() > 1) //this is a stream of Map.Entry<MyVo, List<MyVo>>, then we want to check value.size() > 1
.map(Map.Entry::getValue) //We convert this into a Stream<List<MyVo>>
.flatMap(Collection::stream) //Now we want to have all duplicates in the same stream, so we flatMap it using Collection::stream
.collect(Collectors.toList()); //On this stage we have a Stream<MyVo> with all duplicates, so we can collect it to a list.
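If MyVo does not override equals/hashCode, a sketch of the same idea that groups by the registered field instead (assuming the getRegistered() getter used elsewhere in the question):
List<MyVo> registeredDuplicateBusinesses = registeredBusinesses.stream()
    .collect(Collectors.groupingBy(MyVo::getRegistered)) // key is the registered number
    .values()
    .stream()
    .filter(group -> group.size() > 1) // keep only groups that actually contain duplicates
    .flatMap(Collection::stream)
    .collect(Collectors.toList());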
Additionally, you could also use the Stream API to split dataList into registered and unregistered.
First, we create an isUnregistered method in MyVo:
public boolean isUnregistered() {
    return getValidated() == 0;
}
Then
Map<Boolean, List<MyVo>> registeredMap = dataList.stream().collect(Collectors.groupingBy(MyVo::isUnregistered));
where registeredMap.get(true) will be the unregisteredBusinesses and registeredMap.get(false) the registeredBusinesses.
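Reading the two lists back out of that map might look like this (getOrDefault guards against a key that groupingBy never created; requires java.util.Collections):
List<MyVo> unregisteredBusinesses = registeredMap.getOrDefault(true, Collections.emptyList());
List<MyVo> registeredBusinesses = registeredMap.getOrDefault(false, Collections.emptyList());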
Familiarizing yourself with the concept of Collectors.partitioningBy shall help you problem-solve this further. There are two places in your current requirement where it can be applied.
You are looking for both registered and unregistered businesses. This is where, instead of making use of 0 and 1, you could choose to implement the attribute as a boolean isRegistered, where 0 is false and 1 is true, going forward. Your existing if-else code could be rewritten as:
Map<Boolean, List<MyVo>> partitionBasedOnRegistered = dataList.stream()
.collect(Collectors.partitioningBy(MyVo::isRegistered));
List<MyVo> unregisteredBusinesses = partitionBasedOnRegistered.get(Boolean.FALSE); // here
List<MyVo> registeredBusinesses = partitionBasedOnRegistered.get(Boolean.TRUE);
After you group the registered businesses by registration number (instead of by identity), you require both the duplicate elements and the unique ones. Effectively that is all entries, but again partitioned into two buckets, i.e. one where the value size == 1 and another where the size > 1. Since grouping guarantees at least one element per key, you can collect the required output with an additional mapping.
Map<String, List<MyVo>> groupByRegistrationNumber = registeredBusinesses.stream()
        .collect(Collectors.groupingBy(MyVo::getRegistered)); // group registered businesses by number
Map<Boolean, List<List<MyVo>>> partitionBasedOnDuplicates = groupByRegistrationNumber
.entrySet().stream()
.collect(Collectors.partitioningBy(e -> e.getValue().size() > 1,
Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
If you access the FALSE values of the above map, that would provide you the groupedRegisteredUniqueBusiness and on the other hand values against TRUE key would provide you groupedRegisteredDuplicateBusiness.
Do note that if you were to flatten this List<List<MyVo>> in order to get a List<MyVo> as output, you could also make use of the flatMapping collector, which has a built-in JDK implementation in Java 9 and above; a sketch follows.
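A sketch of that Java 9+ flatMapping variant, building directly on groupByRegistrationNumber from above:
Map<Boolean, List<MyVo>> partitionedFlat = groupByRegistrationNumber
    .entrySet().stream()
    .collect(Collectors.partitioningBy(e -> e.getValue().size() > 1,
            Collectors.flatMapping(e -> e.getValue().stream(), Collectors.toList())));
// partitionedFlat.get(Boolean.TRUE)  -> all registered businesses whose number is duplicated
// partitionedFlat.get(Boolean.FALSE) -> all uniquely registered businesses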
I've got a List<String> which represents the IDs (possibly duplicated) of items from another list, a List<Cheat>, where each Cheat has a String ID and a List<Integer> RNG. Both have accessor methods in Cheat.
I need to convert this list of IDs into a list of RNGs for each Cheat whose ID I have been supplied.
For example, I could have 3 Cheats:
1:{ID:1, RNG:{1,2,3}}
2:{ID:2, RNG{1,2}}
3:{ID:3, RNG:{1}}
And a List of ID's of:
{3,1,1,2}.
I would need to end up with a final list of {1,1,2,3,1,2,3,1,2}, which is the RNG's of Cheat 3, then the RNG's of cheat 1, then the RNG's of cheat 1 again, then finally the RNG's of cheat 2.
If anyone could help me out it would be appreciated. Thank you.
I've tried and failed with:
ImmutableList<Integer> sequenceRngs = cheatIds.stream()
.map(s -> cheats.stream()
.filter(cheat -> cheat.getId().equals(s))
.findFirst()
.map(cheat -> cheat.getRng()))
.flatMap(cheat -> cheat.getRng())
.collect(ListUtils.toImmutableList());
One possible solution:
import java.util.List;
import java.util.stream.Collectors;
class Scratch {
static class Cheat {
int id;
List<Integer> rng;
public Cheat(int id, List<Integer> rng) {
this.id = id;
this.rng = rng;
}
}
public static void main(String[] args) {
List<Cheat> allCheats = List.of(
new Cheat(1, List.of(1,2,3)),
new Cheat(2, List.of(1,2)),
new Cheat(3, List.of(1))
);
List<Integer> result = List.of(3, 1, 1, 2).stream()
.flatMap(id -> allCheats.stream()
.filter(cheat -> cheat.id == id)
.findFirst().orElseThrow().rng.stream())
.collect(Collectors.toList());
System.out.println(result);
}
}
The key is to use flatMap to get the result in a single - not nested - Collection in the end.
The lambda that you pass to flatMap should return a Stream, not a List. And you should handle the case where there's no such element in the stream - even if you are sure there is. Something like this should do:
final ImmutableList<String> sequenceRngs = cheatIds.stream().flatMap(id ->
cheats.stream().filter(cheat -> id.equals(cheat.getId()))
.findAny().orElseThrow(IllegalStateException::new)
.getRng().stream())
.collect(ListUtils.toImmutableList());
Also, I would propose to convert the list of cheats to a map - that would simplify the code and reduce the complexity of searching from O(n) to O(1).
You can attain that with the following steps:
Create a map from cheat ID to its associated RNG list:
Map<String, List<Integer>> map = cheats.stream()
    .collect(Collectors.toMap(Cheat::getId, Cheat::getRng));
Iterate over the cheatIds provided as input and get the corresponding RNG values from the map to collect the output:
List<Integer> output = cheatIds.stream()
.flatMap(ch -> map.get(ch).stream())
.collect(Collectors.toList());
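With the question's sample cheats and the ID list {3,1,1,2}, this yields [1, 1, 2, 3, 1, 2, 3, 1, 2], matching the final list the question asks for.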
I am trying to learn how to use lambda functions for sleeker code, but I am struggling to make this work.
I have two lists. The "old" list is always shorter or the same length as the "updated list".
I want to take the objects from the "updated list" and overwrite the "stale objects" in the shorter "old list".
The lists have a unique field for each object.
For example, it is a bit like updating books in a library with new editions. The UUID (title+author) remains the same but the new object replaces the old on the shelf with a new book/object.
I know I could do it the "long way" and make a HashMap<MyUniqueFieldInMyObject, MyObject> and then take the new List<MyUpdatedObjects> and do the same.
I.e. Have HashMap<UniqueField, MyOldObject> and HashMap<UniqueField, MyUpdatedObject>, then iterate over the old objects with a pseudo "if updated objects have an entry with the same key, overwrite the value with the updated value"...
But...
Is there a "nicer" shorted way to do this with functional lambda statements?
I was thinking along the lines of:
List<MyObject> updatedList;
List<MyObject> oldList;
updatedList.forEach(updatedObject -> {
    String id = updatedObject.getId();
    if (oldList.stream().anyMatch(oldObject -> oldObject.getId().matches(id))) {
        //Do the replacement here? If so...how?
    }
});
Which is where I am lost!
Thanks for any guidance.
If you want to update the list in place rather than making a new list, you can use List.replaceAll:
oldList.replaceAll(old ->
    updatedList.stream()
        .filter(updated -> updated.getId().equals(old.getId()))
        .findFirst()
        .orElse(old)
);
The main problem with this solution is that its complexity is O(size-of-old*size-of-updated). The approach you described as "long way" can protect you from having to iterate over the entire updated list for every entry in the old list:
// note that this will throw if there are multiple entries with the same id
// assumes: import static java.util.stream.Collectors.toMap;
Map<String, MyObject> updatedMap = updatedList.stream()
    .collect(toMap(MyObject::getId, x -> x));
oldList.replaceAll(old -> updatedMap.getOrDefault(old.getId(), old));
I recommend iterating over the oldList - the one you want to update. For each object iterated, match the equivalent one by its id and replace it using Stream::map. If an object is not found, replace it with itself (which doesn't change the object) using Optional::orElse.
List<MyObject> newList = oldList
.stream() // Change values with map()
.map(old -> updatedList.stream() // Iterate each to find...
.filter(updated -> old.getId().equals(updated.getId())) // ...by the same id
.findFirst() // Get new one to replace
.orElse(old)) // Else keep the old one
.collect(Collectors.toList()); // Back to List
List<Foo> updatedList = List.of(new Foo(1L, "new name", "new desc."));
List<Foo> oldList = List.of(new Foo(1L, "old name", "old desc."));
// assumes: import static java.util.stream.Collectors.*; import static java.util.function.Function.identity;
List<Foo> collect = Stream.concat(updatedList.stream(), oldList.stream())
    .collect(collectingAndThen(toMap(Foo::getId, identity(), Foo::merge),
            map -> new ArrayList<>(map.values())));
System.out.println(collect);
This will print out:
[Foo{id=1, name='new name', details='old desc.'}]
In Foo::merge you can define which fields need update:
class Foo {
private Long id;
private String name;
private String details;
/*All args constructor*/
/*getters*/
public static Foo merge(Foo newFoo, Foo oldFoo) {
return new Foo(oldFoo.id, newFoo.name, oldFoo.details);
}
}
I think it's best to add the objects to be updated into a new list, to avoid changing a list you are streaming over; then you can simply replace the old list with the new one.
private List<MyObject> update(List<MyObject> updatedList, List<MyObject> oldList) {
List<MyObject> newList = new ArrayList<>();
updatedList.forEach(object -> {
if (oldList.stream().anyMatch(old -> old.getUniqueId().equals(object.getUniqueId()))) {
newList.add(object);
}
});
return newList;
}