Updating values in a Map on the basis of another map in Java

Map<String, String> map1 = new HashMap<>();
map1.put("k1", "v1");
map1.put("k2", "v2");
map1.put("k3", "v3");
Map<String, String> map2 = new HashMap<>();
map2.put("v1", "val1");
map2.put("v2", "val2");
map2.put("v3", "vav3");
I want to update the values of map1 so that it has the entries:
"k1" , "val1",
"k2" , "val2",
"k3" , "val3"
My solution:
for (Map.Entry<String, String> entry : map1.entrySet()) {
map1.put(entry.getKey(), map2.get(entry.getValue()));
}
Is there any better way to do this?
Edit: I am using Java 7, but I am curious to know if there is any better way in Java 8.

Starting with Java 8, you can just have
map1.replaceAll((k, v) -> map2.get(v));
replaceAll(function) will replace all values from the map map1 with the result of applying the given function. In this case, the function simply retrieves the value from map2.
Note that this solution has the same issue as your initial code: if map2 doesn't have a corresponding mapping, null will be stored. You may want to call getOrDefault to supply a default value in that case.
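For example, a minimal sketch that simply keeps the old value when map2 has no mapping for it (the choice of fallback is an assumption here):
// Fall back to the existing value when map2 has no mapping; any other default works too
map1.replaceAll((k, v) -> map2.getOrDefault(v, v));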
public static void main(String[] args) {
Map<String, String> map1 = new HashMap<>();
map1.put("k1", "v1");
map1.put("k2", "v2");
map1.put("k3", "v3");
Map<String, String> map2 = new HashMap<>();
map2.put("v1", "val1");
map2.put("v2", "val2");
map2.put("v3", "val3");
map1.replaceAll((k, v) -> map2.get(v));
System.out.println(map1); // prints "{k1=val1, k2=val2, k3=val3}"
}

For Java 7 there is nothing more that you can do; you are already doing it in the best way possible.
I'm adding this answer as a reference to show that, for this case, using lambda expressions in Java 8 can actually be slower. See this example:
public static void main(String[] args) {
Map<String, String> map1 = new HashMap<>();
final Map<String, String> map2 = new HashMap<>();
for ( int i=0; i<100000; i++ ){
map1.put("k"+i, "v"+i);
map2.put("v"+i, "val"+i);
}
long time;
long prev_time = System.currentTimeMillis();
for (Map.Entry<String, String> entry : map1.entrySet()) {
map1.put(entry.getKey(), map2.get(entry.getValue()));
}
time = System.currentTimeMillis() - prev_time;
System.out.println("Time after for loop " + time);
map1 = new HashMap<>();
for ( int i=0; i<100000; i++ ){
map1.put("k"+i, "v"+i);
}
prev_time = System.currentTimeMillis();
map1.replaceAll((k, v) -> map2.get(v));
time = System.currentTimeMillis() - prev_time;
System.out.println("Time after for loop " + time);
}
The output for this will be:
Time after for loop 40
Time after replaceAll 100
The second measurement varies, but it is always bigger than the first one.
I'm no lambda specialist, but I guess there is more work being done there than in the plain for-each of the first scenario.
Running this test case over and over, the lambda version almost always takes about twice as long as the first for-each case.

In Java 8 you can write:
Map<String, String> result = map1.entrySet()
.stream()
.map(entry -> new AbstractMap.SimpleEntry<>(entry.getKey(), map2.get(entry.getValue())))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
Not the nicest thing, but it is still a non-mutating solution.

Related

Delete from parent HashMap based on a nested HashMap condition

I have the following structure
HashMap<String, HashMap<String, String>> h = new HashMap<>();
HashMap<String, String> h1 = new HashMap<>();
h1.put("key10", "value10");
h1.put("key11", "value11");
h1.put("date", "2018-10-18T00:00:57.907Z");
h.put("1#100", h1);
HashMap<String, String> h2 = new HashMap<>();
h2.put("key20", "value20");
h2.put("key21", "value21");
h2.put("date", "2023-02-03T10:00:00.907Z");
h.put("2#000", h2);
Imagine I have many entries like the examples above.
At a certain moment (a scheduler) I have this requirement:
check all nested hash maps (for each/stream)
see if date condition is true
find parent key and delete from main hash map
In this example the final hash map will contain only the second entry:
2#000 => {key20 => value20, key21 => value21, date => 2023-02-03T10:00:00.907Z}
I have this code right now:
h.forEach((k, v) -> {
    v.entrySet()
        .stream()
        .filter(e -> e.getKey().equals("date"))
        .filter(t -> Timestamp.from(Instant.now()).getTime() - Timestamp.valueOf(t.getValue()).getTime() > milisDiff)
        // need now to access the parent map h and delete the entry with key k
Can I do this in one step (lambda), or do I need an extra structure to collect the parent keys and then delete them in a separate for-each?
This may do what you want. Just filter out bad elements and assign to the same map.
HashMap<String, HashMap<String, String>> h = new HashMap<>();
HashMap<String, String> h1 = new HashMap<>();
h1.put("key10", "value10");
h1.put("key11", "value11");
h1.put("date", "2018-10-18T00:00:57.907Z");
h.put("1#100", h1);
HashMap<String, String> h2 = new HashMap<>();
h2.put("key20", "value20");
h2.put("key21", "value21");
h2.put("date", "2023-02-04T10:00:00.907Z");
h.put("2#000", h2);
// any instant after `now` will pass the filter and be put in the map
Predicate<String> check = str -> Instant.parse(str)
.isAfter(Instant.now());
h = h.entrySet().stream()
.filter(e -> check.test(e.getValue().get("date")))
.collect(Collectors.toMap(Entry::getKey, Entry::getValue,
(a,b)->a,
HashMap::new));
h.values().forEach(m -> {
m.entrySet().forEach(System.out::println);
});
prints
date=2023-02-04T10:00:00.907Z
key21=value21
key20=value20
My predicate simply drops an entry once its date is in the past; yours used a tighter millisecond-difference threshold.
Updated
Here is another option in case building a new map takes too long. It uses an iterator to run through the loop and modify the existing map, removing the nested maps with old dates.
Iterator<Entry<String,Map<String,String>>> it = h.entrySet().iterator();
while (it.hasNext()) {
Entry<String,Map<String, String>> e = it.next();
if (!check.test(e.getValue().get("date"))) {
it.remove();
}
}
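As a side note (my addition, not from the original answer), since Java 8 the same in-place cleanup can also be written as a one-liner with removeIf on the entry set, reusing the check predicate defined above:
// Removes every parent entry whose nested "date" fails the check; mutates h directly
h.entrySet().removeIf(e -> !check.test(e.getValue().get("date")));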

Update existing keys of a HashMap

I have a HashMap<Integer, String> of size 3:
1=>"Value1"
2=>"Value2"
3=>"Value3"
Now I want to decrease all keys by 1 (if key > 1):
Output:
1=>"Value2"
2=>"Value3"
What I am trying
for (e in hashMap.entries) {
val entry = e as Map.Entry<*, *>
var keyPos = (entry.key as Int)
if (keyPos != -1) {
if (keyPos > 1) {
keyPos = keyPos - 1
if (keyPos != -1) {
hashMap.put(keyPos, entry.value as String?)
}
}
}
}
But it's not giving the required output.
How can I make it work without a ConcurrentModificationException?
An alternative is to use the mapKeys extension function, which allows you to redefine the key for a map entry:
fun main() {
val originalMap = mapOf(1 to "value1", 2 to "value2", 3 to "value3")
val updatedMap = originalMap
.mapKeys {
if (it.key > 1) {
it.key - 1
} else {
it.key
}
}
println(updatedMap) // prints: {1=value2, 2=value3}
}
Note that this will not update the map in-place, but it will create a new one. Also note that:
In case if any two entries are mapped to the equal keys, the value of the latter one will overwrite the value associated with the former one.
This means that in case two keys are conflicting, in general you can't know which one will "win" (unless you're using a LinkedHashMap, which preserves insertion order).
A more general approach would be to:
decrement all keys
filter out all non-positive keys
This will require two full iterations, though (unless you use Sequences, which are lazily evaluated):
fun main() {
val originalMap = mapOf(1 to "value1", 2 to "value2", 3 to "value3")
val updatedMap = originalMap
.mapKeys {
it.key - 1
}.filter {
it.key > 0
}
println(updatedMap)
}
EDIT: here is the same with Java 7 compatible code (without streams)
HashMap<Integer, String> hashMap = new HashMap<>();
hashMap.put(1, "test1");
hashMap.put(2, "test2");
hashMap.put(3, "test3");
Map<Integer, String> yourNewHashMap = new HashMap<>();
for (final Map.Entry<Integer, String> entry : hashMap.entrySet()) {
if (entry.getKey() != 1) { // make sure index 1 is omitted
yourNewHashMap.put(entry.getKey() - 1, entry.getValue()); // decrease the index for each key/value pair and add it to the new map
}
}
Old answer with streams:
As a new Map Object is okay for you, I would go with the following stream:
comments are inline
HashMap<Integer, String> hashMap = new HashMap<>();
hashMap.put(1, "test1");
hashMap.put(2, "test2");
hashMap.put(3, "test3");
// use this
Map<Integer, String> yourNewHashMap = hashMap.entrySet().stream()
.filter(es -> es.getKey() != 1) // make sure index 1 is omitted
.map(es -> new AbstractMap.SimpleEntry<Integer, String>(es.getKey() - 1, es.getValue())) // decrease the index for each key/value pair
.collect(Collectors.toMap(AbstractMap.SimpleEntry::getKey, AbstractMap.SimpleEntry::getValue)); // create a new map
public static void main(String[] args) {
HashMap<Integer, String> map = new HashMap<>();
// Populate the HashMap
map.put(1, "Value1");
map.put(2, "Value2");
map.put(3, "Value3");
System.out.println("Original HashMap: "
+ map);
decreaseAllKeysByOne(map);
}
private static void decreaseAllKeysByOne(HashMap<Integer, String> map) {
// Add your condition (if key>1)
HashMap<Integer, String> newMap = new HashMap<>();
map.remove(1);
Iterator<Map.Entry<Integer, String>> iterator = map.entrySet().iterator();
int i = 1;
while (iterator.hasNext()) {
Map.Entry<Integer, String> entry = iterator.next();
newMap.put(i, entry.getValue());
i++;
}
System.out.println("Modified HashMap: "
+ newMap);
}
Output :
Original HashMap: {1=Value1, 2=Value2, 3=Value3}
Modified HashMap: {1=Value2, 2=Value3}

Iterate big hashmap in parallel

I have a LinkedHashMap which may contain up to 300k records at maximum. I want to iterate this map in parallel to improve performance. The function iterates through the map of vectors and finds the dot product of a given vector against all the vectors in the map. It also has one more check based on a date value, and it returns a nested hashmap.
This is the code using iterator:
public HashMap<String,HashMap<String,Double>> function1(String key, int days) {
LocalDate date = LocalDate.now().minusDays(days);
HashMap<String,Double> ret = new HashMap<>();
HashMap<String,Double> ret2 = new HashMap<>();
OpenMapRealVector v0 = map.get(key).value;
for(Map.Entry<String, FixedTimeHashMap<OpenMapRealVector>> e: map.entrySet()) {
if(!e.getKey().equals(key)) {
Double d = v0.dotProduct(e.getValue().value);
d = Double.parseDouble(new DecimalFormat("###.##").format(d));
ret.put(e.getKey(),d);
if(e.getValue().date.isAfter(date)){
ret2.put(e.getKey(),d);
}
}
}
HashMap<String,HashMap<String,Double>> result = new HashMap<>();
result.put("dot",ret);
result.put("anomaly",ret2);
return result;
}
Update:
I looked into Java 8 streams, but I am running into ClassCastException and NullPointerException when using the parallel stream, as this map is being modified elsewhere.
Code:
public HashMap<String,HashMap<String,Double>> function1(String key, int days) {
LocalDate date = LocalDate.now().minusDays(days);
HashMap<String,Double> ret = new HashMap<>();
HashMap<String,Double> ret2 = new HashMap<>();
OpenMapRealVector v0 = map.get(key).value;
synchronized (map) {
map.entrySet().parallelStream().forEach(e -> {
if(!e.getKey().equals(key)) {
Double d = v0.dotProduct(e.getValue().value);
d = Double.parseDouble(new DecimalFormat("###.##").format(d));
ret.put(e.getKey(),d);
if(e.getValue().date.isAfter(date)) {
ret2.put(e.getKey(),d);
}
}
});
}
HashMap<String,HashMap<String,Double>> result = new HashMap<>();
result.put("dot",ret);
result.put("anomaly",ret2);
return result;
}
I have synchronized the map usage, but it still gives me the following errors:
java.util.concurrent.ExecutionException: java.lang.ClassCastException
Caused by: java.lang.ClassCastException
Caused by: java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to java.util.HashMap$TreeNode
Also, I was wondering: should I split up the map into multiple pieces and run each using a different thread in parallel?
You need to retrieve the Set<Map.Entry<K, V>> from the map.
Here's how you iterate over a Map using parallel streams in Java 8:
Map<String, String> myMap = new HashMap<> ();
myMap.entrySet ()
.parallelStream ()
.forEach (entry -> {
String key = entry.getKey ();
String value = entry.getValue ();
// here add whatever processing you wanna do using the key / value retrieved
// ret.put (....);
// ret2.put (....)
});
Clarification:
The maps ret and ret2 should be declared as ConcurrentHashMaps to allow the concurrent inserts / updates from multiple threads.
So the declaration of the 2 maps become:
Map<String,Double> ret = new ConcurrentHashMap<> ();
Map<String,Double> ret2 = new ConcurrentHashMap<> ();
One possible solution using Java 8 would be,
Map<String, Double> dotMap = map.entrySet().stream().filter(e -> !e.getKey().equals(key))
.collect(Collectors.toMap(Map.Entry::getKey, e -> Double
.parseDouble(new DecimalFormat("###.##").format(v0.dotProduct(e.getValue().value)))));
Map<String, Double> anomalyMap = map.entrySet().stream().filter(e -> !e.getKey().equals(key))
.filter(e -> e.getValue().date.isAfter(date))
.collect(Collectors.toMap(Map.Entry::getKey, e -> Double
.parseDouble(new DecimalFormat("###.##").format(v0.dotProduct(e.getValue().value)))));
result.put("dot", dotMap);
result.put("anomaly", anomalyMap);
Update
Here's a much more elegant solution:
Map<String, Map<String, Double>> resultMap = map.entrySet().stream().filter(e -> !e.getKey().equals(key))
.collect(Collectors.groupingBy(e -> e.getValue().date.isAfter(date) ? "anomaly" : "dot",
Collectors.toMap(Map.Entry::getKey, e -> Double.parseDouble(
new DecimalFormat("###.##").format(v0.dotProduct(e.getValue().value))))));
Here we first group the entries based on anomaly or dot, and then use a downstream Collector to create a Map for each group. Also, I have updated the .filter() criteria based on the suggestions.

How to merge more than one HashMap and sum the values of the same key in Java

I am trying to merge more than one HashMap and also sum the values of the same key.
I want to explain my problem with a toy example as follows:
HashMap<String, Integer> m = new HashMap<>();
HashMap<String, Integer> m2 = new HashMap<>();
m.put("apple", 2);
m.put("pear", 3);
m2.put("apple", 9);
m2.put("banana", 6);
I tried putAll:
m.putAll(m2);
The output is as follows:
{banana=6, apple=9, pear=3}
But its result is not correct for this problem.
I want the output to be:
{banana=6, apple=11, pear=3}
How can I get this result in Java?
If you are using Java 8, you can use the new merge method of Map.
m2.forEach((k, v) -> m.merge(k, v, (v1, v2) -> v1 + v2));
This is a very nice use case for Java 8 streams. You can concatenate the streams of entries and then collect them into a new map:
Map<String, Integer> combinedMap = Stream.concat(m1.entrySet().stream(), m2.entrySet().stream())
.collect(Collectors.groupingBy(Map.Entry::getKey,
Collectors.summingInt(Map.Entry::getValue)));
There are lots of nice things about this solution, including being able to make it parallel, expanding it to as many maps as you want, and being able to trivially filter the maps if required. It also does not require the original maps to be mutable.
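To illustrate the parallel point (my addition, a sketch rather than part of the original answer), the same pipeline can be run in parallel by switching to a concurrent collector:
// Parallel variant: groupingByConcurrent collects into a ConcurrentMap
Map<String, Integer> combinedMap = Stream.concat(m1.entrySet().stream(), m2.entrySet().stream())
        .parallel()
        .collect(Collectors.groupingByConcurrent(Map.Entry::getKey,
                Collectors.summingInt(Map.Entry::getValue)));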
This method should do it (in Java 7+ as written; for Java 5/6, spell out the generic types instead of using the diamond operator):
public static <K> Map<K, Integer> mergeAndAdd(Map<K, Integer>... maps) {
Map<K, Integer> result = new HashMap<>();
for (Map<K, Integer> map : maps) {
for (Map.Entry<K, Integer> entry : map.entrySet()) {
K key = entry.getKey();
Integer current = result.get(key);
result.put(key, current == null ? entry.getValue() : entry.getValue() + current);
}
}
return result;
}
Here's my quick and dirty implementation:
import java.util.HashMap;
import java.util.Map;
public class MapMerger {
public static void main(String[] args) {
HashMap<String, Integer> m = new HashMap<>();
HashMap<String, Integer> m2 = new HashMap<>();
m.put("apple", 2);
m.put("pear", 3);
m2.put("apple", 9);
m2.put("banana", 6);
final Map<String, Integer> result = (new MapMerger()).mergeSumOfMaps(m, m2);
System.out.println(result);
}
public Map<String, Integer> mergeSumOfMaps(Map<String, Integer>... maps) {
final Map<String, Integer> resultMap = new HashMap<>();
for (final Map<String, Integer> map : maps) {
for (final String key : map.keySet()) {
final int value;
if (resultMap.containsKey(key)) {
final int existingValue = resultMap.get(key);
value = map.get(key) + existingValue;
}
else {
value = map.get(key);
}
resultMap.put(key, value);
}
}
return resultMap;
}
}
Output:
{banana=6, apple=11, pear=3}
There are some things you should do (like null checking), and I'm not sure if it's the fastest. Also, this is specific to integers. I attempted to make one using generics over the Number class, but you'd need this method for each numeric type (byte, int, short, long, etc.).
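As a hedged sketch building on this idea (my addition; the class name GenericMapMerger and its method are hypothetical): you can let the caller pass the combining function, e.g. a BinaryOperator, so one method covers Integer, Long, Double, and so on. It relies on Map.merge, so it needs Java 8+.
import java.util.HashMap;
import java.util.Map;
import java.util.function.BinaryOperator;

public class GenericMapMerger {
    // Merges any number of maps, combining values for duplicate keys with `combiner`.
    @SafeVarargs
    public static <K, V> Map<K, V> mergeMaps(BinaryOperator<V> combiner, Map<K, V>... maps) {
        Map<K, V> result = new HashMap<>();
        for (Map<K, V> map : maps) {
            for (Map.Entry<K, V> entry : map.entrySet()) {
                result.merge(entry.getKey(), entry.getValue(), combiner);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> m1 = new HashMap<>();
        Map<String, Integer> m2 = new HashMap<>();
        m1.put("apple", 2);
        m1.put("pear", 3);
        m2.put("apple", 9);
        m2.put("banana", 6);
        System.out.println(mergeMaps(Integer::sum, m1, m2)); // {banana=6, apple=11, pear=3}
    }
}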
I improved Lucas Ross's code. Instead of passing the maps to the function one by one, I pass all of them at once in an ArrayList of HashMaps, like this:
public HashMap<String, Integer> mergeAndAdd(ArrayList<HashMap<String, Integer>> maplist) {
HashMap<String, Integer> result = new HashMap<>();
for (HashMap<String, Integer> map : maplist) {
for (Map.Entry<String, Integer> entry : map.entrySet()) {
String key = entry.getKey();
Integer current = result.get(key);
result.put(key, current == null ? entry.getValue() : entry.getValue() + current);
}
}
return result;
}
It works too. Thanks to everybody!
Assume that you have many HashMaps: Map<String,Integer> map1, map2, map3;
Then you can use Java 8 streams:
Map<String,Integer> combinedMap = Stream.of(map1, map2, map3)
.flatMap(map -> map.entrySet().stream())
.collect(Collectors.groupingBy(Map.Entry::getKey,
Collectors.summingInt(Map.Entry::getValue)));
If the key exists, add to its value. If not, insert it.
Here is a simple example which merges one map into another:
// key/newVal come from one entry of the map being merged in
Integer oldVal = map2.get(key);
if (oldVal == null)
{
    map2.put(key, newVal);
}
else
{
    map2.put(key, newVal + oldVal);
}
Obviously you have to loop over the first map so you can process all of its entries, but that's trivial.
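For completeness, here is a minimal sketch of that loop, assuming both maps are Map<String, Integer> and map1 is merged into map2:
// Merge map1 into map2, summing values for keys present in both
for (Map.Entry<String, Integer> entry : map1.entrySet()) {
    Integer oldVal = map2.get(entry.getKey());
    if (oldVal == null) {
        map2.put(entry.getKey(), entry.getValue());
    } else {
        map2.put(entry.getKey(), entry.getValue() + oldVal);
    }
}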
Something like this should work:
for (Map.Entry<String, Integer> entry : map.entrySet()) {
String map1_key = entry.getKey();
int map1_value = entry.getValue();
//check:
if(map2.get(map1_key)!=null){
int map2_value = map2.get(map1_key);
//merge:
map3.put(map1_key,map1_value+map2_value);
}else{
map3.put(map1_key,map1_value);
}
}
for (Map.Entry<String, Integer> entry2 : map2.entrySet()) {
String map2_key = entry2.getKey();
int map2_value = entry2.getValue();
//check:
if(map1.get(map2_key)!=null){
int map1_value = map1.get(map2_key);
//merge:
map3.put(map2_key,map1_value+map2_value);
}else{
map3.put(map2_key,map2_value);
}
}

What's the best way to sum two Map<String,String>?

I have the following maps.
Map<String,String> map1= new HashMap<String, String>(){{
put("no1","123"); put("no2","5434"); put("no5","234");}};
Map<String,String> map2 = new HashMap<String, String>(){{
put("no1","523"); put("no2","234"); put("no3","234");}};
sum(map1, map2);
I want to join them into one, summing up values with the same key. What's the best way I could do it using Java 7 or Guava libraries?
Expected output (as a Map<String, String>):
{no1=646, no2=5668, no5=234, no3=234}
private static Map<String, String> sum(Map<String, String> map1, Map<String, String> map2) {
Map<String, String> result = new HashMap<String, String>();
result.putAll(map1);
for (String key : map2.keySet()) {
String value = result.get(key);
if (value != null) {
Integer newValue = Integer.valueOf(value) + Integer.valueOf(map2.get(key));
result.put(key, newValue.toString());
} else {
result.put(key, map2.get(key));
}
}
return result;
}
Try this:
Map<String, List<String>> map3 = new HashMap<String, List<String>>();
for (Entry<String, String> e : map1.entrySet()) {
List<String> list = new ArrayList<String>();
list.add(e.getValue());
String v2 = map2.remove(e.getKey());
if (v2 != null) {
list.add(v2);
}
map3.put(e.getKey(), list);
}
for (Entry<String, String> e : map2.entrySet()) {
map3.put(e.getKey(), new ArrayList<String>(Arrays.asList(e.getValue())));
}
Java 8 introduces Map.merge(K, V, BiFunction), which makes this easy if not particularly concise:
Map<String, String> result = new HashMap<>(map1);
//or just merge into map1 if mutating it is okay
map2.forEach((k, v) -> result.merge(k, v, (a, b) ->
Integer.toString(Integer.parseInt(a) + Integer.parseInt(b))));
If you're doing this repeatedly, you're going to be parsing and creating a lot of strings. If you're generating the maps one at a time, you're best off maintaining a list of strings and only parsing and summing once.
Map<String, List<String>> deferredSum = new HashMap<>();
//for each map
mapN.forEach((k, v) ->
deferredSum.computeIfAbsent(k, x -> new ArrayList<String>()).add(v));
//when you're done
Map<String, String> result = new HashMap<>();
deferredSum.forEach((k, v) -> result.put(k,
Long.toString(v.stream().mapToInt(Integer::parseInt).sum())));
If this summing is a common operation, consider whether using Integer as your value type makes more sense; you can use Integer::sum as the merge function in that case, and maintaining lists of deferred sums would no longer be necessary.
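For example, a minimal sketch assuming the two maps had been declared as Map<String, Integer> (intMap1 and intMap2 are placeholder names):
// With Integer values the whole merge collapses to one merge call per entry
Map<String, Integer> result = new HashMap<>(intMap1);
intMap2.forEach((k, v) -> result.merge(k, v, Integer::sum));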
Try this
Map<String,String> map1= new HashMap<String, String>(){{
put("no1","123"); put("no2","5434"); put("no5","234");}};
Map<String,String> map2 = new HashMap<String, String>(){{
put("no1","523"); put("no2","234"); put("no3","234");}};
Map<String,String> newMap=map1;
for(String a:map2.keySet()){
if(newMap.keySet().contains(a)){
newMap.put(a,""+(Integer.parseInt(newMap.get(a))+Integer.parseInt(map2.get(a))));
}
else{
newMap.put(a,map2.get(a));
}
}
for(String k : newMap.keySet()){
System.out.println("key : "+ k + " value : " + newMap.get(k));
}
