Get LinkedHashMap Value with Dynamic Key - java

I want to get a value from a LinkedHashMap with a dynamic key, like below.
def map = [Employee1: [Status: 'Working', Id: 1], Employee2: [Status: 'Resigned', Id: 2]]
def keys = "Employee1.Status"
def keyPath = "";
def keyList = keys.tokenize(".");
keyList.eachWithIndex() { key, i ->
    keyPath += "$key"
    if (i != keyList.size() - 1) { keyPath += "." }
}
println keyPath //Employee1.Status
println map.keyPath //Always null
println map.'Employee1'.'Status' //Working
println map.Employee1.Status //Working
Here map.keyPath always returns null. How can I get the value with a dynamic key?

I think you can simply do this:
def tmpMap = map
keyList.subList(0, keyList.size() - 1).each { key ->
    tmpMap = tmpMap[key] // descend one level per path element (indexing `map` here instead would break for paths deeper than two levels)
}
println tmpMap[keyList[keyList.size() - 1]]
This will extract the sub maps until the actual value key is reached. To make this more stable you should add some logic to check if the value associated with the current key is actually a map.

Out of curiosity, I tried using just your code.
Note that keyPath == 'Employee1.Status', a single literal key, not the nested access 'Employee1'.'Status'.
So to get the value you can do something like this:
def map = [
    Employee1: [Status: 'Working', Id: 1],
    Employee2: [Status: 'Resigned', Id: 2]
]
def keys = "Employee1.Status"
def keyPath = "";
def keyList = keys.tokenize(".");
keyList.eachWithIndex() { key, i ->
    keyPath += "$key"
    if (i != keyList.size() - 1) { keyPath += '.' }
}
println keyPath //Employee1.Status
//tokenize it and get elements as a[0] and a[1]
def a = keyPath.tokenize(".");
println map.(a[0]).(a[1]) //Working
println map.'Employee1'.'Status' //Working
println map.Employee1.Status //Working
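For comparison, the same dotted-path walk can be sketched in plain Java. This is a hypothetical helper (lookup is my name, not a library method), assuming the nested structure is a Map<String, Object>:

static Object lookup(Map<String, Object> map, String path) {
    Object current = map;
    for (String key : path.split("\\.")) {
        // stop if an intermediate value is missing or not itself a map
        if (!(current instanceof Map)) {
            return null;
        }
        current = ((Map<?, ?>) current).get(key);
    }
    return current;
}

// lookup(map, "Employee1.Status") would return "Working" for the question's data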

Related

Spark Accumulator

I am new to accumulators in Spark. I have created an accumulator which gathers the sum and count of all columns in a dataframe into a Map.
It is not functioning as expected, so I have a few doubts.
When I run this class (pasted below) in local mode, I can see the accumulator getting updated, but the final value is still empty. For debugging purposes, I added a print statement in add().
Q1) Why is the final accumulable not updated when the accumulator's add is called?
For reference, I studied CollectionsAccumulator, where a SynchronizedList from Java Collections is used.
Q2) Does it need to be a synchronized/concurrent collection for an accumulator to update?
Q3) Which collection would be best suited for this purpose?
I have attached my execution flow along with a Spark UI snapshot for analysis.
Thanks.
EXECUTION:
INPUT DATAFRAME -
+-------+-------+
|Column1|Column2|
+-------+-------+
|1 |2 |
|3 |4 |
+-------+-------+
OUTPUT -
Add - Map(Column1 -> Map(sum -> 1, count -> 1), Column2 -> Map(sum -> 2, count -> 1))
Add - Map(Column1 -> Map(sum -> 4, count -> 2), Column2 -> Map(sum -> 6, count -> 2))
TestRowAccumulator(id: 1, name: Some(Test Accumulator for Sum&Count), value: Map())
SPARK UI SNAPSHOT -
CLASS :
class TestRowAccumulator extends AccumulatorV2[Row, Map[String, Map[String, Int]]] {

  private var colMetrics: Map[String, Map[String, Int]] = Map[String, Map[String, Int]]()

  override def isZero: Boolean = this.colMetrics.isEmpty

  override def copy(): AccumulatorV2[Row, Map[String, Map[String, Int]]] = {
    val racc = new TestRowAccumulator
    racc.colMetrics = colMetrics
    racc
  }

  override def reset(): Unit = {
    colMetrics = Map[String, Map[String, Int]]()
  }

  override def add(v: Row): Unit = {
    v.schema.foreach(field => {
      val name: String = field.name
      val value: Int = v.getAs[Int](name)
      if (!colMetrics.contains(name)) {
        colMetrics = colMetrics ++ Map(name -> Map("sum" -> value, "count" -> 1))
      } else {
        val metric = colMetrics(name)
        val sum = metric("sum") + value
        val count = metric("count") + 1
        colMetrics = colMetrics ++ Map(name -> Map("sum" -> sum, "count" -> count))
      }
    })
  }

  override def merge(other: AccumulatorV2[Row, Map[String, Map[String, Int]]]): Unit = {
    other match {
      case t: TestRowAccumulator => {
        colMetrics.map(col => {
          val map2: Map[String, Int] = t.colMetrics.getOrElse(col._1, Map())
          val map1: Map[String, Int] = col._2
          map1 ++ map2.map { case (k, v) => k -> (v + map1.getOrElse(k, 0)) }
        })
      }
      case _ => throw new UnsupportedOperationException(s"Cannot merge ${this.getClass.getName} with ${other.getClass.getName}")
    }
  }

  override def value: Map[String, Map[String, Int]] = {
    colMetrics
  }
}
After a bit of debugging, I found that the merge function is being called.
It had erroneous code, so the accumulable value was Map().
EXECUTION FLOW OF ACCUMULATOR (LOCAL MODE):
ADD
ADD
MERGE
Once I corrected the merge function, the accumulator worked as expected.
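The error is visible in the posted code: merge computes the merged maps with colMetrics.map(...) but never assigns the result back to colMetrics, so the driver's accumulator stays empty. A corrected merge could look like the sketch below (my reconstruction, not necessarily the exact fix):

override def merge(other: AccumulatorV2[Row, Map[String, Map[String, Int]]]): Unit = {
  other match {
    case t: TestRowAccumulator =>
      // Build the merged result over the union of column names
      // and assign it back to colMetrics.
      colMetrics = (colMetrics.keySet ++ t.colMetrics.keySet).map { name =>
        val m1 = colMetrics.getOrElse(name, Map[String, Int]())
        val m2 = t.colMetrics.getOrElse(name, Map[String, Int]())
        name -> (m1 ++ m2.map { case (k, v) => k -> (v + m1.getOrElse(k, 0)) })
      }.toMap
    case _ =>
      throw new UnsupportedOperationException(
        s"Cannot merge ${this.getClass.getName} with ${other.getClass.getName}")
  }
}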

Map.computeIfPresent() to initialize a List in Java

I have the following code:
for (String val : values) {
    EnumType type = EnumType.get(val);
    if (type != null) {
        String key = type.name();
        if (key.equals("camp2"))
            key = "camp1";
        ArrayList<String> tempList = mapN.get(key); // 1
        if (tempList == null) { // 2
            tempList = new ArrayList<>();
        }
        tempList.add(val);
        mapN.put(key, tempList); // 3
    }
}
where mapN and values are:
private Map<String, ArrayList<String>> mapN
ArrayList<String> values
type is an enum
And I have Sonar, and Sonar tells me that at the places marked //1, //2 and //3 I should use:
Map.computeIfPresent()
But I have been reading about this topic, and I couldn't find a way to change my code.
Who can help me?
I guess you can shorten that to:
values.forEach(val -> {
    EnumType type = EnumType.get(val);
    if (type != null) {
        String key = type.name();
        if (key.equals("camp2"))
            key = "camp1";
        mapN.computeIfAbsent(key, x -> new ArrayList<>()).add(val);
    }
});
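As a side note, Sonar's hint names computeIfPresent(), but computeIfAbsent() is the variant that matches this initialize-if-missing pattern: it creates and stores the list only when the key is absent, and returns the stored value either way. A minimal sketch of the difference (hypothetical map and keys, not from the question):

Map<String, ArrayList<String>> demo = new HashMap<>();

// computeIfAbsent: "camp1" is missing, so the list is created, stored and returned;
// add() therefore updates the stored list.
demo.computeIfAbsent("camp1", k -> new ArrayList<>()).add("value1");

// computeIfPresent: "other" has no mapping, so the function is never called
// and nothing is stored; it cannot replace the null check in the question.
demo.computeIfPresent("other", (k, list) -> { list.add("value2"); return list; });

System.out.println(demo); // {camp1=[value1]}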
A modified version of @Eugene's answer yields:
values.forEach(val -> Optional.of(val)
        // get corresponding type, if it exists
        .map(EnumType::get)
        // get key
        .map(EnumType::name)
        // handle key == "camp2" scenario
        .map(key -> key.equals("camp2") ? "camp1" : key)
        // add val to map value list
        .ifPresent(key -> mapN.computeIfAbsent(key, x -> new ArrayList<>()).add(val)));
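Note that this chain relies on Optional.map() returning an empty Optional when the mapper yields null, so a val with no matching EnumType simply short-circuits the pipeline instead of throwing.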

JUnit5 assertAll

The code is as shown below. I want it to test all the elements of keyNames.
But it stops if any test fails and doesn't iterate through all the array elements.
My understanding was that in assertAll all assertions are executed, and any failures are reported together.
Thanks for your time and help.
private void validateData(SearchHit searchResult, String[] line) {
    for (Integer key : keyNames) {
        String expectedValue = getExpectedValue(line, key);
        String elementName = mappingProperties.getProperty(key.toString());
        if (elementName != null && elementName.contains(HEADER)) {
            assertAll(
                () -> assumingThat((expectedValue != null && expectedValue.length() > 0),
                    () -> {
                        String actualValue = testHelper.getHeaderData(searchResult, elementName);
                        if (expectedValue != null) {
                            assertEquals(expectedValue, actualValue, " ###Element Name -> " + elementName + " : Expected Value ### " + expectedValue + " ### Actual Value ###" + actualValue);
                        }
                    }
                )
            );
        }
    }
}
The javadoc of Assertions.assertAll() states:
Asserts that all supplied executables do not throw exceptions.
But you actually provide a single Executable to assertAll() at each iteration,
so a failure in any iteration of the loop terminates the test execution.
In fact you invoke assertAll() multiple times, requesting at most a single assertion each time:
if (expectedValue != null) {
    assertEquals(expectedValue, actualValue, " ###Element Name -> " + elementName + " : Expected Value ### " + expectedValue + " ### Actual Value ###" + actualValue);
}
What you want is the reverse: invoke assertAll() once, passing multiple Executable instances that perform the required assertions.
You could collect them in a List with a classic loop and pass it as assertAll(list.stream()), as in the sketch just below, or create a Stream<Executable> without any collect and pass it directly, as in assertAll(stream).
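For instance, the List-collecting variant could be sketched like this (untested, reusing the question's helpers and hoisting the non-empty-expected-value assumption into a plain condition):

List<Executable> executables = new ArrayList<>();
for (Integer key : keyNames) {
    String expectedValue = getExpectedValue(line, key);
    String elementName = mappingProperties.getProperty(key.toString());
    if (elementName != null && elementName.contains(HEADER)
            && expectedValue != null && expectedValue.length() > 0) {
        // each Executable captures the values computed for its own key
        executables.add(() -> assertEquals(expectedValue,
                testHelper.getHeaderData(searchResult, elementName),
                "Element Name -> " + elementName + " : Expected Value ### " + expectedValue));
    }
}
assertAll(executables.stream());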
Here is a version with Stream (not tested at all) but you should get the idea :
Stream<Executable> executables =
    keyNames.stream()
            .map(key ->
                // create an executable for each value of the streamed list
                () -> {
                    String expectedValue = getExpectedValue(line, key);
                    String elementName = mappingProperties.getProperty(key.toString());
                    if (elementName != null && elementName.contains(HEADER)) {
                        assumingThat((expectedValue != null && expectedValue.length() > 0),
                            () -> {
                                String actualValue = testHelper.getHeaderData(searchResult, elementName);
                                if (expectedValue != null) {
                                    assertEquals(expectedValue, actualValue, " ###Element Name -> " + elementName + " : Expected Value ### " + expectedValue + " ### Actual Value ###" + actualValue);
                                }
                            }
                        );
                    }
                }
            );
Assertions.assertAll(executables);
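Note that the mapping is lazy: each Executable runs only when assertAll() consumes the stream, and failures for all keys are then collected and reported together in a single MultipleFailuresError.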
assertAll() groups all assertions and errors that were passed to that invocation of assertAll(). It won't group assertions across all invocations that occur during the test method.
In the code you posted, you pass a single assertion lambda into assertAll(). It won't group errors across the multiple keys, as each key has a separate invocation of assertAll().
To ensure you separately test all values in a collection, take a look at parameterized tests.
As indicated by @user31601, parameterized tests (see documentation) automatically test all cases independently.
This leads to the following (somewhat simpler) code:
@ParameterizedTest
@MethodSource("getKeys")
void testKey(Integer key) {
    String elementName = mappingProperties.getProperty(key.toString());
    assumeTrue(elementName != null);
    assumeTrue(elementName.contains(HEADER));
    String expectedValue = getExpectedValue(line, key);
    assumeTrue(expectedValue != null);
    assumeTrue(expectedValue.length() > 0);
    String actualValue = testHelper.getHeaderData(searchResult, elementName);
    String doc = String.format("name: %s, expected %s, actual %s", elementName, expectedValue, actualValue);
    assertEquals(expectedValue, actualValue, doc);
}

private static Stream<Integer> getKeys() {
    return keyNames.stream();
}

create a map from list in Scala

I need to create a HashMap of directory-to-file in Scala while I list all files in the directory. How can I achieve this in Scala?
val directoryToFile = awsClient.listFiles(uploadPath).collect {
  case path if !path.endsWith("/") => {
    path match {
      // do some regex matching to get directory & file names
      case regex(dir, date) => {
        // NEED TO CREATE A HASH MAP OF dir -> date. How???
      }
      case _ => None
    }
  }
}
The method listFiles(path: String) returns a Seq[String] of the absolute paths of all files under the path passed as an argument to the function.
Try to write more idiomatic Scala. Something like this:
val directoryToFile = (for {
  path <- awsClient.listFiles(uploadPath)
  if !path.endsWith("/")
  regex(dir, date) <- regex.findFirstIn(path)
} yield dir -> date).sortBy(_._2).toMap
You can filter and then foldLeft:
val l = List("""/opt/file1.txt""", """/opt/file2.txt""")
val finalMap = l
  .filter(!_.endsWith("/"))
  .foldLeft(Map.empty[String, String])((map, s) => // String values: the regex captures are strings
    s match {
      case regex(dir, date) => map + (dir -> date)
      case _ => map
    }
  )
You can try something like this:
val regex = """(\d)-(\d)""".r
val paths = List("1-2", "3-4", "555")
for {
  // Hint to Scala to produce specific type
  _ <- Map("" -> "")
  // Not sure why your !path.endsWith("/") is not part of regex
  path @ regex(a, b) <- paths
  if path.startsWith("1")
} yield (a, b)
//> scala.collection.immutable.Map[String,String] = Map(1 -> 2)
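The Map("" -> "") generator is only there to steer the result type: the first generator of a for-comprehension determines which collection gets built, so starting from a Map makes the yield produce a Map[String, String] instead of a List of pairs.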
Slightly more complicated if you need max:
val regex = """(\d)-(\d)""".r
val paths = List("1-2", "3-4", "555", "1-3")
for {
(_, ps) <-
( for {
path#regex(a, b) <- paths
if path.startsWith("1")
} yield (a, b)
).groupBy(_._1)
} yield ps.maxBy(_._2)
//> scala.collection.immutable.Map[String,String] = Map(1 -> 3)
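For comparison, the same filter-then-regex collection into a map can be sketched in plain Java with java.util.regex (the pattern below is hypothetical, and listFiles is assumed to return a java.util.List<String>):

// Hypothetical pattern capturing a directory and a date portion of each path.
Pattern regex = Pattern.compile("(.+)/(\\d{4}-\\d{2}-\\d{2})\\.txt");

Map<String, String> directoryToFile = new HashMap<>();
for (String path : awsClient.listFiles(uploadPath)) {
    if (path.endsWith("/")) continue; // skip directory entries
    Matcher m = regex.matcher(path);
    if (m.matches()) {
        // later matches overwrite earlier ones, like toMap in the Scala version
        directoryToFile.put(m.group(1), m.group(2));
    }
}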

Cassandra Hector: How to retrieve all row keys in a column family?

I am a newbie in Cassandra and want to fetch all row key values from a ColumnFamily.
Suppose I have a User ColumnFamily that looks like this:
list User
RowKey: amit
=> (column=fullname, value=amitdubey, timestamp=1381832571947000)
-------------------
RowKey: jax1
=> (column=fullname, value=jagveer1, timestamp=1381655141564000)
-------------------
RowKey: jax2
=> (column=fullname, value=amitdubey, timestamp=1381832571947000)
-------------------
RowKey: jax3
=> (column=fullname, value=jagveer3, timestamp=1381655141564000)
-------------------
I am looking for example code to retrieve all the row keys of the family,
something like this:
amit
jax1
jax2
jax3
My Cassandra version is 1.1.
I'd appreciate any help.
You can do this simply by using a RangeSlicesQuery in the Hector client:
RangeSlicesQuery<String, String, String> rangeSlicesQuery =
        HFactory.createRangeSlicesQuery(keyspace, ss, ss, ss)
                .setColumnFamily("User")
                .setRange(null, null, false, rowCount)
                .setRowCount(rowCount)
                .setReturnKeysOnly();
String lastKey = null;
while (true) {
    rangeSlicesQuery.setKeys(lastKey, null);
    QueryResult<OrderedRows<String, String, String>> result = rangeSlicesQuery.execute();
    OrderedRows<String, String, String> rows = result.get();
    Iterator<Row<String, String, String>> rowsIterator = rows.iterator();
    /**
     * Skip the first row, since it is the same as the last one
     * from the previous execution.
     */
    if (lastKey != null && rowsIterator != null) {
        rowsIterator.next();
    }
    while (rowsIterator.hasNext()) {
        Row<String, String, String> row = rowsIterator.next();
        lastKey = row.getKey();
        System.out.println(lastKey);
    }
    if (rows.getCount() < rowCount) {
        break;
    }
}
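As an aside, some Hector versions ship me.prettyprint.cassandra.service.KeyIterator, which wraps this paging pattern. If it is available in your version, a sketch (assuming String row keys) would be:

// Iterates lazily over all row keys of the "User" column family,
// paging internally much like the manual loop above.
KeyIterator<String> keyIterator = new KeyIterator<String>(keyspace, "User", StringSerializer.get());
for (String key : keyIterator) {
    System.out.println(key);
}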
