Delete from parent HashMap based on a nested HashMap condition - Java

I have the following structure
HashMap<String, HashMap<String, String>> h = new HashMap<>();
HashMap<String, String> h1 = new HashMap<>();
h1.put("key10", "value10");
h1.put("key11", "value11");
h1.put("date", "2018-10-18T00:00:57.907Z");
h.put("1#100", h1);
HashMap<String, String> h2 = new HashMap<>();
h2.put("key20", "value20");
h2.put("key21", "value21");
h2.put("date", "2023-02-03T10:00:00.907Z");
h.put("2#000", h2);
Imagine I have many entries like the examples above.
At a certain moment (a scheduler run) I have this requirement:
check all nested hash maps (forEach/stream)
see if the date condition is true
find the parent key and delete the entry from the main hash map
In this example the final hash map will contain only the second entry:
h => {2#000 => {key20 => value20, key21 => value21, date => 2023-02-03T10:00:00.907Z}}
I have this code right now:
h.forEach((k,v) -> {
v.entrySet()
.stream()
.filter(e -> e.getKey().equals("date"))
.filter(t -> Timestamp.from(Instant.now()).getTime() - Timestamp.valueOf(t.getValue()).getTime() > milisDiff)
// need now to access the parent and delete the entry by its key k
Can I do this in one step (lambda), or do I need an extra structure to collect the parent keys and then delete them in a separate forEach?

This may do what you want: just filter out the bad elements and assign the result back to the same map.
HashMap<String, HashMap<String, String>> h = new HashMap<>();
HashMap<String, String> h1 = new HashMap<>();
h1.put("key10", "value10");
h1.put("key11", "value11");
h1.put("date", "2018-10-18T00:00:57.907Z");
h.put("1#100", h1);
HashMap<String, String> h2 = new HashMap<>();
h2.put("key20", "value20");
h2.put("key21", "value21");
h2.put("date", "2023-02-04T10:00:00.907Z");
h.put("2#000", h2);
// any instant after `now` will pass the filter and be put in the map
Predicate<String> check = str -> Instant.parse(str)
.isAfter(Instant.now());
h = h.entrySet().stream()
.filter(e -> check.test(e.getValue().get("date")))
.collect(Collectors.toMap(Entry::getKey, Entry::getValue,
(a,b)->a,
HashMap::new));
h.values().forEach(m -> {
m.entrySet().forEach(System.out::println);
});
prints
date=2023-02-04T10:00:00.907Z
key21=value21
key20=value20
My predicate simply drops an entry once its date has passed; yours compared against a millisecond threshold.
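If you want to keep the original millisecond-threshold semantics instead, the predicate could be written with java.time.Duration (a sketch; milisDiff is the variable from the question and is assumed to be in scope):
// keep an entry only while its date is at most milisDiff milliseconds in the past (assumption)
Predicate<String> check = str ->
    Duration.between(Instant.parse(str), Instant.now()).toMillis() <= milisDiff;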
Updated
Here is another option in case building a new map takes too long. It uses an iterator to walk the existing map and modify it in place, removing the inner maps with old dates.
Iterator<Entry<String, HashMap<String, String>>> it = h.entrySet().iterator();
while (it.hasNext()) {
    Entry<String, HashMap<String, String>> e = it.next();
    if (!check.test(e.getValue().get("date"))) {
        it.remove();
    }
}
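The same in-place removal can also be written in one line with Collection.removeIf on the entry set, reusing the check predicate from above (a sketch):
// removes every outer entry whose nested "date" value fails the predicate
h.entrySet().removeIf(e -> !check.test(e.getValue().get("date")));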

Related

How to get a Set from a HashMap within a HashMap

I am trying to create a Set and Collection from a HashMap that is within another HashMap. The problem is that the getKey() method is not applicable.
HashMap<HashMap<String, String>, String> list = getList(issues);
Set <String> set1 = list.getKey().keySet();
Collection <String> set2 = list.getKey().values();
From your collection, to get all the keys of each entry you have to loop over each entry and gather its keys. Using Java 8+, you can do that with:
Set<String> allKeys = list.entrySet().stream()
.flatMap(e -> e.getKey().keySet().stream())
.collect(Collectors.toSet());
To get the value, you can do it like this:
List<String> allValues = list.entrySet().stream()
.flatMap(e -> e.getKey().values().stream())
.collect(Collectors.toList());
Get Keys
// Get the keys of the child HashMaps
Set<String> keys = parent.keySet().stream()
    .flatMap(child -> child.keySet().stream())
    .collect(Collectors.toSet());
1. Stream over the keySet of the parent hashMap (its keys are the child hashMaps)
parent.keySet().stream()
2. Return the keySet of each child hashMap (it's like {1}, {2,3}, {4,5})
child -> child.keySet().stream()
3. Flatten the sets from the previous step (it's like converting {1}, {2,3}, {4,5} to {1,2,3,4,5})
flatMap(child -> child.keySet().stream())
4. Collect the keys into a Set
collect(Collectors.toSet());
Get Values
//Get Values of childHashMap
Collection<String> values = parent.keySet().stream()
.map(HashMap::values)
.flatMap(Collection::stream)
.collect(Collectors.toList());
1. Stream over the keySet of the parent hashMap (its keys are the child hashMaps)
parent.keySet().stream()
2. Return the values of each child hashMap (it's like {1}, {2}, {3})
map(HashMap::values)
3. Use flatMap to flatten the previous step (convert {1}, {2}, {3} to {1,2,3})
flatMap(Collection::stream)
4. Collect the values into a List
collect(Collectors.toList())
Sample
//Parent
HashMap<HashMap<String, String>, String> parent = new HashMap<>();
//Child HashMap 1
HashMap<String, String> childHashMap = new HashMap<>();
childHashMap.put("a", "b");
//Child HashMap2
HashMap<String, String> childHashMap2 = new HashMap<>();
childHashMap2.put("d", "e");
//Parent init
parent.put(childHashMap, "c");
parent.put(childHashMap2, "f");
//Get Keys
Set<String> keys = parent.keySet().stream().flatMap(child -> child.keySet().stream()).collect(Collectors.toSet());
//Get values
Collection<String> values = parent.keySet().stream().map(HashMap::values).flatMap(Collection::stream).collect(Collectors.toList());
//Print key and values
keys.forEach(System.out::println);
values.forEach(System.out::println);
Result:
keys = a, d
values = e, b

Create or Update key value pairs in both maps

I have a map of key-value pairs where each value is itself a map of key-value pairs.
Something like
Map<String, Map<String, Integer>> outMap = new HashMap<>();
Map<String, Integer> inMap = new HashMap<>();
inMap.put("i11", 111);
inMap.put("i21", 121);
outMap.put("o1", inMap);
How would I handle an entry so that I can create/update at both levels of the map using Java 8?
The input would be an outer key, an inner key, and a value. We should be able to add a new entry if it doesn't exist in the outer map. If the entry exists in the outer map, then insert the new entry into the inner map if it doesn't exist, else update the inner map with the new value.
What you want to achieve can be done with this single line of code:
outerMap.computeIfAbsent(outerKey, k -> new HashMap<>()).put(innerKey, value);
But without these methods, you can achieve the same with just get() and put():
Map<String, Integer> innerMap = outerMap.get(outerKey);
if (innerMap == null) {
innerMap = new HashMap<>();
outerMap.put(outerKey, innerMap);
}
innerMap.put(innerKey, value);
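A quick usage sketch against the question's outMap (the keys "o2", "i31" and the values are made up for illustration):
outMap.computeIfAbsent("o2", k -> new HashMap<>()).put("i21", 221); // outer key absent: creates the inner map, adds the entry
outMap.computeIfAbsent("o1", k -> new HashMap<>()).put("i11", 999); // both keys exist: overwrites 111 with 999
outMap.computeIfAbsent("o1", k -> new HashMap<>()).put("i31", 131); // outer exists, inner absent: adds a new inner entry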
How to update a single value or multiple values at once in the nested maps
NOTE:
givenMap.computeIfAbsent(k, function) -> if the key in the given map is absent or mapped to null, then compute the value using the function and add it to the given map
Map<String, Map<String, Integer>> outMap = new HashMap<>();
Map<String, Integer> inMap = new HashMap<>();
inMap.put("i11", 111);
inMap.put("i21", 121);
outMap.put("o1", inMap);
System.out.println(outMap.toString());
System.out.println(inMap.toString());
Output before updating:
{o1={i11=111, i21=121}}
{i11=111, i21=121}
//If you want to add one value in the inner hashmap you created:
outMap.computeIfAbsent("newHashMapKey",k -> new HashMap<>()).put("Arpan",2345);
// if you want to add more than 1 value at a time in the inner hashmap
outMap.computeIfAbsent("newHashMapKey2",k -> new HashMap<>()).putAll(new HashMap<String, Integer>(){{
put("One", 1);
put("Two", 2);
put("Three", 3);
}});
System.out.println(outMap.toString());
System.out.println(inMap.toString());
Output after updating the outer map (note the original inner map is unchanged):
{o1={i11=111, i21=121}, newHashMapKey2={Two=2, Three=3, One=1}, newHashMapKey={Arpan=2345}}
{i11=111, i21=121}
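On Java 9+, a hedged alternative to the double-brace initializer above that avoids creating an anonymous HashMap subclass:
outMap.computeIfAbsent("newHashMapKey2", k -> new HashMap<>())
    .putAll(Map.of("One", 1, "Two", 2, "Three", 3));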

Convert java list to map using stream with indexes

I'm trying to learn how to use the Java 8 collections and I was wondering if there was a way to convert my list to a map using a java stream.
List<PrimaryCareDTO> batchList = new ArrayList<>();
PrimaryCareDTO obj = new PrimaryCareDTO();
obj.setProviderId("123");
obj.setLocatorCode("abc");
batchList.add(obj);
obj = new PrimaryCareDTO();
obj.setProviderId("456");
obj.setLocatorCode("def");
batchList.add(obj);
I'm wondering how I would go about creating my list above into a map using a stream. I know how to use the foreach etc with puts, but I was just wondering if there was a more elegant way to build the map using a stream. (I'm aware the syntax below is not correct, I'm new to streams and not sure how to write it)
AtomicInteger index = new AtomicInteger(0);
Map<String, Object> result = batchList.stream()
.map("providerId" + index.getAndIncrement(), PrimaryCareDTO::getProviderId)
.map("locatorCode" + index.get(), PrimaryCareDTO::getLocatorCode);
The goal is to represent the following.
Map<String, Object> map = new HashMap<>();
//Group a
map.put("providerId1", "123");
map.put("locatorCode1", "abc");
//Group b
map.put("providerId2", "456");
map.put("locatorCode2", "def");
...
import java.util.AbstractMap.SimpleEntry;
import java.util.Map.Entry;
...
AtomicInteger index = new AtomicInteger(0);
List<SimpleEntry<String, String>> providerIds =
batchList.stream()
.map(e -> new SimpleEntry<>("providerId" + index.incrementAndGet(), e.getProviderId()))
.collect(Collectors.toList());
index.set(0);
List<SimpleEntry<String, String>> locatorCodes =
batchList.stream()
.map(e -> new SimpleEntry<>("locatorCode" + index.incrementAndGet(), e.getLocatorCode()))
.collect(Collectors.toList());
Map<String, String> map = Stream.of(providerIds,
locatorCodes)
.flatMap(e -> e.stream())
.collect(Collectors.toMap(Entry::getKey, Entry::getValue));
First it creates two lists, using Entry (from Map) to represent String-String tuples:
list with tuples providerId# as 'key' with the values e.g. "123"
list with tuples locatorCode# as 'key' with the values e.g. "abc"
It then creates a stream containing these two lists as 'elements', which are concatenated with flatMap() to get a single long stream of Entry.
(The reason the first two can't stay streams, and I have to go through a List and back to a stream, is that the two invocations of index.incrementAndGet() would otherwise only be evaluated when the streams are consumed, which is after index.set(0).)
It then puts the resulting key-value pairs into a map (with Collectors.toMap()).
You would have to stream twice, as you want to add two of the properties to the map:
AtomicInteger index = new AtomicInteger(1);
Map<String, String> result1 = batchList.stream()
.collect(Collectors
.toMap(ignored -> "providerId" + index.getAndIncrement(), PrimaryCareDTO::getProviderId)
);
index.set(1);
Map<String, String> result2 = batchList.stream()
.collect(Collectors
.toMap(ignored -> "locatorCode" + index.getAndIncrement(), PrimaryCareDTO::getLocatorCode)
);
Map<String, String> result = new HashMap<>();
result.putAll(result1);
result.putAll(result2);
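Both answers mutate an AtomicInteger from inside a stream, which is a bit fragile. A sketch of a single-pass alternative that indexes the list positions directly (it assumes the same PrimaryCareDTO getters and the SimpleEntry/Entry imports shown earlier, plus java.util.stream.IntStream):
Map<String, String> map = IntStream.range(0, batchList.size())
    .boxed()
    .flatMap(i -> Stream.of(
        new SimpleEntry<>("providerId" + (i + 1), batchList.get(i).getProviderId()),
        new SimpleEntry<>("locatorCode" + (i + 1), batchList.get(i).getLocatorCode())))
    .collect(Collectors.toMap(Entry::getKey, Entry::getValue));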

Iterate big hashmap in parallel

I have a LinkedHashMap which may contain up to 300k records. I want to iterate this map in parallel to improve performance. The function iterates through the map of vectors and finds the dot product of a given vector against all the vectors in the map. It also has one more check based on a date value, and it returns a nested hashmap.
This is the code using iterator:
public HashMap<String,HashMap<String,Double>> function1(String key, int days) {
LocalDate date = LocalDate.now().minusDays(days);
HashMap<String,Double> ret = new HashMap<>();
HashMap<String,Double> ret2 = new HashMap<>();
OpenMapRealVector v0 = map.get(key).value;
for(Map.Entry<String, FixedTimeHashMap<OpenMapRealVector>> e: map.entrySet()) {
if(!e.getKey().equals(key)) {
Double d = v0.dotProduct(e.getValue().value);
d = Double.parseDouble(new DecimalFormat("###.##").format(d));
ret.put(e.getKey(),d);
if(e.getValue().date.isAfter(date)){
ret2.put(e.getKey(),d);
}
}
}
HashMap<String,HashMap<String,Double>> result = new HashMap<>();
result.put("dot",ret);
result.put("anomaly",ret2);
return result;
}
Update:
I looked into Java 8 streams, but I am running into ClassCastException and NullPointerException when using the parallel stream, as this map is being modified elsewhere.
Code:
public HashMap<String,HashMap<String,Double>> function1(String key, int days) {
LocalDate date = LocalDate.now().minusDays(days);
HashMap<String,Double> ret = new HashMap<>();
HashMap<String,Double> ret2 = new HashMap<>();
OpenMapRealVector v0 = map.get(key).value;
synchronized (map) {
map.entrySet().parallelStream().forEach(e -> {
if(!e.getKey().equals(key)) {
Double d = v0.dotProduct(e.getValue().value);
d = Double.parseDouble(new DecimalFormat("###.##").format(d));
ret.put(e.getKey(),d);
if(e.getValue().date.isAfter(date)) {
ret2.put(e.getKey(),d);
}
}
});
}
HashMap<String,HashMap<String,Double>> result = new HashMap<>();
result.put("dot",ret);
result.put("anomaly",ret2);
return result;
}
I have synchronized the map usage, but it still gives me the following errors:
java.util.concurrent.ExecutionException: java.lang.ClassCastException
Caused by: java.lang.ClassCastException
Caused by: java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to java.util.HashMap$TreeNode
Also, I was thinking: should I split the map into multiple pieces and process each piece in a separate thread in parallel?
You need to retrieve the Set<Map.Entry<K, V>> from the map.
Here's how you iterate on a Map using parallel Streams in Java8:
Map<String, String> myMap = new HashMap<>();
myMap.entrySet()
    .parallelStream()
    .forEach(entry -> {
        String key = entry.getKey();
        String value = entry.getValue();
        // here add whatever processing you want to do using the key / value retrieved
        // ret.put(....);
        // ret2.put(....);
    });
Clarification:
The maps ret and ret2 should be declared as ConcurrentHashMaps to allow the concurrent inserts / updates from multiple threads.
So the declarations of the two maps become:
Map<String, Double> ret = new ConcurrentHashMap<>();
Map<String, Double> ret2 = new ConcurrentHashMap<>();
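Putting the two points together for the question's function, a sketch (map, key, v0 and date are the asker's fields/locals; replacing the per-element DecimalFormat with arithmetic rounding is an assumption about acceptable formatting):
Map<String, Double> ret = new ConcurrentHashMap<>();
Map<String, Double> ret2 = new ConcurrentHashMap<>();
map.entrySet().parallelStream()
    .filter(e -> !e.getKey().equals(key))
    .forEach(e -> {
        double d = v0.dotProduct(e.getValue().value);
        d = Math.round(d * 100.0) / 100.0; // round to two decimals without allocating a DecimalFormat per element
        ret.put(e.getKey(), d);
        if (e.getValue().date.isAfter(date)) {
            ret2.put(e.getKey(), d);
        }
    });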
One possible solution using Java 8 would be,
Map<String, Double> dotMap = map.entrySet().stream().filter(e -> !e.getKey().equals(key))
.collect(Collectors.toMap(Map.Entry::getKey, e -> Double
.parseDouble(new DecimalFormat("###.##").format(v0.dotProduct(e.getValue().value)))));
Map<String, Double> anomalyMap = map.entrySet().stream().filter(e -> !e.getKey().equals(key))
.filter(e -> e.getValue().date.isAfter(date))
.collect(Collectors.toMap(Map.Entry::getKey, e -> Double
.parseDouble(new DecimalFormat("###.##").format(v0.dotProduct(e.getValue().value)))));
result.put("dot", dotMap);
result.put("anomaly", anomalyMap);
Update
Here's a much more elegant solution:
Map<String, Map<String, Double>> resultMap = map.entrySet().stream().filter(e -> !e.getKey().equals(key))
.collect(Collectors.groupingBy(e -> e.getValue().date.isAfter(date) ? "anomaly" : "dot",
Collectors.toMap(Map.Entry::getKey, e -> Double.parseDouble(
new DecimalFormat("###.##").format(v0.dotProduct(e.getValue().value))))));
Here we first group the entries into either the anomaly or the dot bucket, and then use a downstream Collector to create a Map for each group. Note that this puts each entry into exactly one group, whereas the original code also copies the after-date entries into the dot map. I have also updated the .filter() criteria accordingly.

Java 8 groupingby Into map that contains a list

I have the following data:
List<Map<String, Object>> products = new ArrayList<>();
Map<String, Object> product1 = new HashMap<>();
product1.put("Id", 1);
product1.put("number", "123");
product1.put("location", "ny");
Map<String, Object> product2 = new HashMap<>();
product2.put("Id", 1);
product2.put("number", "456");
product2.put("location", "ny");
Map<String, Object> product3 = new HashMap<>();
product3.put("Id", 2);
product3.put("number", "789");
product3.put("location", "ny");
products.add(product1);
products.add(product2);
products.add(product3);
I'm trying to stream over the products list, group by the Id, and for each Id have a list of number values, while returning a Map that contains three keys: Id, a list of number, and location.
So my output would be:
List<Map<String, Object>> groupedProducts
map[0]
{id:1, number[123,456], location:ny}
map[1]
{id:2, number[789], location:ny}
I have tried:
Map<String, List<Object>> groupedProducts = products.stream()
.flatMap(m -> m.entrySet().stream())
.collect(groupingBy(Entry::getKey, mapping(Entry::getValue, toList())));
which prints:
{number=[123, 456, 789], location=[ny, ny, ny], Id=[1, 1, 2]}
I realise Map<String, List<Object>> is incorrect, but it's the best I could achieve to get the stream to work. Any feedback is appreciated.
In your case grouping by Id key with Collectors.collectingAndThen(downstream, finisher) could do the trick. Consider following example:
Collection<Map<String, Object>> finalMaps = products.stream()
.collect(groupingBy(it -> it.get("Id"), Collectors.collectingAndThen(
Collectors.toList(),
maps -> (Map<String, Object>) maps.stream()
.reduce(new HashMap<>(), (result, map) -> {
final List<Object> numbers = (List<Object>) result.getOrDefault("number", new ArrayList<>());
result.put("Id", map.getOrDefault("Id", result.getOrDefault("Id", null)));
result.put("location", map.getOrDefault("location", result.getOrDefault("location", null)));
if (map.containsKey("number")) {
numbers.add(map.get("number"));
}
result.put("number", numbers);
return result;
}))
)
)
.values();
System.out.println(finalMaps);
In the first step you group all maps with the same Id value into a List<Map<String,Object>> (this is what the Collectors.toList() passed to collectingAndThen() does). After creating that list, the "finisher" function is called: in this case we transform the list of maps into a single map using a Stream.reduce() operation. We start with an empty HashMap<String,Object> and iterate over the maps, take the values from the current map in the iteration, and set values according to your specification ("Id" and "location" get overridden, "number" keeps a list of values).
Output
[{number=[123, 456], location=ny, Id=1}, {number=[789], location=ny, Id=2}]
To make the code simpler you can extract the BinaryOperator passed to Stream.reduce into a method and use a method reference instead, as sketched below. This function defines what it means to combine two maps into a single one, so it is the core logic of the whole reduction.
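A sketch of that extraction (the enclosing class name ProductMerger is made up; the body is the same merge logic as in the reduce step above):
@SuppressWarnings("unchecked")
private static Map<String, Object> mergeProducts(Map<String, Object> result, Map<String, Object> map) {
    // accumulate "number" values in a list, override "Id" and "location"
    List<Object> numbers = (List<Object>) result.getOrDefault("number", new ArrayList<>());
    result.put("Id", map.getOrDefault("Id", result.getOrDefault("Id", null)));
    result.put("location", map.getOrDefault("location", result.getOrDefault("location", null)));
    if (map.containsKey("number")) {
        numbers.add(map.get("number"));
    }
    result.put("number", numbers);
    return result;
}
// the reduce step in the collector then reads:
// maps -> (Map<String, Object>) maps.stream().reduce(new HashMap<>(), ProductMerger::mergeProducts)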
