Flattening a 3-level nested JSON string in Java

The requirement is to build a generic utility that flattens an input JSON object into a flat JSON object.
A sample input looks like this:
{
  "Source": "source-1",
  "Rows": [
    {
      "Keys": {
        "device-id": "BC04-EBH-N3K-01",
        "interface-name": "TenGigE0/0/0/39",
        "node-name": "0/0/CPU0"
      },
      "Timestamp": 1567621527656,
      "inner": {
        "donm": {
          "id": "0062",
          "mol": {
            "rem": 30,
            "len": 11,
            "org": {
              "ldp": [
                {
                  "t": 486,
                  "o": 322
                },
                {
                  "t": 487,
                  "o": 32,
                  "twss": 1,
                  "tlv": "00:01"
                }
              ]
            },
            "chlen": 14,
            "poe": 5,
            "combs": 10,
            "chaype": 4,
            "rek": 0,
            "rem-um": 67
          },
          "detail": {
            "enas": "B,R",
            "systes": "B,R",
            "timng": 91,
            "syn": "C",
            "met-type": 0,
            "neses": {
              "lldEDIT": [
                {
                  "ium": 830,
                  "m": 1,
                  "ass": {
                    "ape": "ipv4",
                    "ipvs": "94"
                  }
                }
              ]
            },
            "pess": "0008",
            "por]d": 0,
            "pon": "BCtive",
            "sysme": "BC1"
          },
          "reme": "Bu1",
          "hean": 0,
          "porl": "Et1"
        }
      }
    }
  ],
  "Tey": {
    "epath": "Cgetail",
    "sustr": "MX",
    "coime": 1567621527653,
    "msp": 1567621527653,
    "come": 1567621527660,
    "nor": "BC5",
    "cid": 14789654
  }
}
I have been trying to flatten it to three levels and came up with the utility below. However, things get complicated when I have to deal with arrays and with values of type String, long, Timestamp, etc. I am also unable to work out how the nested keys can be kept unique.
public static Map<String, Object> flattenJson(JsonNode input) {
    Map<String, Object> finalMap = new HashMap<>();
    ObjectMapper datamapper = new ObjectMapper();
    Map<String, Object> topLevelJsonMap = datamapper.convertValue(input, Map.class);
    Set<String> topLevelKeys = topLevelJsonMap.keySet();
    for (String topLevelKey : topLevelKeys) {
        System.out.println("Key :::: " + topLevelKey);
        Object topLevelData = topLevelJsonMap.get(topLevelKey);
        System.out.println("value :::: " + topLevelData.toString());
        if (topLevelData instanceof ArrayNode) {
            ArrayNode arrayOfData = (ArrayNode) topLevelData;
            for (JsonNode dataNode : arrayOfData) {
                flattenJson(input);
            }
        } else if (topLevelData instanceof JsonNode) {
            Map<String, Object> innerLevelJsonMap = datamapper.convertValue(topLevelData, Map.class);
            Set<String> innerLevelKeys = innerLevelJsonMap.keySet();
            for (String innerLevelKey : innerLevelKeys) {
                System.out.println("inner key :::: " + innerLevelKey);
                flattenJson((JsonNode) innerLevelJsonMap.get(innerLevelKey));
            }
        } else {
            finalMap.put(topLevelKey, topLevelData);
        }
    }
    return finalMap;
}
Any help is greatly appreciated.

You can take a look at json-flattener.
BTW, I am the author of this library.
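For reference, a minimal sketch of how the library could be used on the document above, assuming the json-flattener artifact is on the classpath; the class name and the file path here are illustrative, while JsonFlattener.flatten and JsonFlattener.flattenAsMap are the library's static entry points:

import com.github.wnameless.json.flattener.JsonFlattener;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

public class FlattenWithLibrary {
    public static void main(String[] args) throws Exception {
        // Read the sample document from the question (the path is illustrative).
        String json = new String(Files.readAllBytes(Paths.get("./test.json")));

        // Flatten into a map keyed by the full path of every leaf value
        // (dot notation with [n] for array indices by default).
        Map<String, Object> flattened = JsonFlattener.flattenAsMap(json);
        flattened.forEach((k, v) -> System.out.println(k + " = " + v));

        // Or produce a flattened JSON string directly.
        System.out.println(JsonFlattener.flatten(json));
    }
}

The path-style keys keep nested and array values unique without any extra bookkeeping on your side.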

To avoid conflicts between key names, you can build the flattened keys according to the JSON Pointer specification. JSON Pointer is also supported by the Jackson library, so you can later use those keys to traverse the JsonNode tree.
A simple implementation could look like this:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.atomic.AtomicInteger;

public class JsonApp {

    public static void main(String[] args) throws Exception {
        File jsonFile = new File("./test.json");

        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(jsonFile);

        Map<String, JsonNode> map = new JsonFlattener(root).flatten();

        System.out.println("Use key-value pairs:");
        map.forEach((k, v) -> System.out.println(k + " => " + v));
        System.out.println();

        System.out.println("Use pointers:");
        map.forEach((k, v) -> System.out.println(k + " => " + root.at(k)));
    }
}

class JsonFlattener {

    private final Map<String, JsonNode> json = new LinkedHashMap<>(64);
    private final JsonNode root;

    JsonFlattener(JsonNode node) {
        this.root = Objects.requireNonNull(node);
    }

    public Map<String, JsonNode> flatten() {
        process(root, "");
        return json;
    }

    private void process(JsonNode node, String prefix) {
        if (node.isObject()) {
            ObjectNode object = (ObjectNode) node;
            object.fields()
                  .forEachRemaining(entry -> process(entry.getValue(), prefix + "/" + entry.getKey()));
        } else if (node.isArray()) {
            ArrayNode array = (ArrayNode) node;
            AtomicInteger counter = new AtomicInteger();
            array.elements()
                 .forEachRemaining(item -> process(item, prefix + "/" + counter.getAndIncrement()));
        } else {
            json.put(prefix, node);
        }
    }
}
The above code prints:
Use key-value pairs:
/Source => "source-1"
/Rows/0/Keys/device-id => "BC04-EBH-N3K-01"
/Rows/0/Keys/interface-name => "TenGigE0/0/0/39"
/Rows/0/Keys/node-name => "0/0/CPU0"
/Rows/0/Timestamp => 1567621527656
/Rows/0/inner/donm/id => "0062"
/Rows/0/inner/donm/mol/rem => 30
/Rows/0/inner/donm/mol/len => 11
/Rows/0/inner/donm/mol/org/ldp/0/t => 486
/Rows/0/inner/donm/mol/org/ldp/0/o => 322
/Rows/0/inner/donm/mol/org/ldp/1/t => 487
/Rows/0/inner/donm/mol/org/ldp/1/o => 32
/Rows/0/inner/donm/mol/org/ldp/1/twss => 1
/Rows/0/inner/donm/mol/org/ldp/1/tlv => "00:01"
/Rows/0/inner/donm/mol/chlen => 14
/Rows/0/inner/donm/mol/poe => 5
/Rows/0/inner/donm/mol/combs => 10
/Rows/0/inner/donm/mol/chaype => 4
/Rows/0/inner/donm/mol/rek => 0
/Rows/0/inner/donm/mol/rem-um => 67
/Rows/0/inner/donm/detail/enas => "B,R"
/Rows/0/inner/donm/detail/systes => "B,R"
/Rows/0/inner/donm/detail/timng => 91
/Rows/0/inner/donm/detail/syn => "C"
/Rows/0/inner/donm/detail/met-type => 0
/Rows/0/inner/donm/detail/neses/lldEDIT/0/ium => 830
/Rows/0/inner/donm/detail/neses/lldEDIT/0/m => 1
/Rows/0/inner/donm/detail/neses/lldEDIT/0/ass/ape => "ipv4"
/Rows/0/inner/donm/detail/neses/lldEDIT/0/ass/ipvs => "94"
/Rows/0/inner/donm/detail/pess => "0008"
/Rows/0/inner/donm/detail/por]d => 0
/Rows/0/inner/donm/detail/pon => "BCtive"
/Rows/0/inner/donm/detail/sysme => "BC1"
/Rows/0/inner/donm/reme => "Bu1"
/Rows/0/inner/donm/hean => 0
/Rows/0/inner/donm/porl => "Et1"
/Tey/epath => "Cgetail"
/Tey/sustr => "MX"
/Tey/coime => 1567621527653
/Tey/msp => 1567621527653
/Tey/come => 1567621527660
/Tey/nor => "BC5"
/Tey/cid => 14789654
Use pointers:
/Source => "source-1"
/Rows/0/Keys/device-id => "BC04-EBH-N3K-01"
/Rows/0/Keys/interface-name => "TenGigE0/0/0/39"
/Rows/0/Keys/node-name => "0/0/CPU0"
/Rows/0/Timestamp => 1567621527656
/Rows/0/inner/donm/id => "0062"
/Rows/0/inner/donm/mol/rem => 30
/Rows/0/inner/donm/mol/len => 11
/Rows/0/inner/donm/mol/org/ldp/0/t => 486
/Rows/0/inner/donm/mol/org/ldp/0/o => 322
/Rows/0/inner/donm/mol/org/ldp/1/t => 487
/Rows/0/inner/donm/mol/org/ldp/1/o => 32
/Rows/0/inner/donm/mol/org/ldp/1/twss => 1
/Rows/0/inner/donm/mol/org/ldp/1/tlv => "00:01"
/Rows/0/inner/donm/mol/chlen => 14
/Rows/0/inner/donm/mol/poe => 5
/Rows/0/inner/donm/mol/combs => 10
/Rows/0/inner/donm/mol/chaype => 4
/Rows/0/inner/donm/mol/rek => 0
/Rows/0/inner/donm/mol/rem-um => 67
/Rows/0/inner/donm/detail/enas => "B,R"
/Rows/0/inner/donm/detail/systes => "B,R"
/Rows/0/inner/donm/detail/timng => 91
/Rows/0/inner/donm/detail/syn => "C"
/Rows/0/inner/donm/detail/met-type => 0
/Rows/0/inner/donm/detail/neses/lldEDIT/0/ium => 830
/Rows/0/inner/donm/detail/neses/lldEDIT/0/m => 1
/Rows/0/inner/donm/detail/neses/lldEDIT/0/ass/ape => "ipv4"
/Rows/0/inner/donm/detail/neses/lldEDIT/0/ass/ipvs => "94"
/Rows/0/inner/donm/detail/pess => "0008"
/Rows/0/inner/donm/detail/por]d => 0
/Rows/0/inner/donm/detail/pon => "BCtive"
/Rows/0/inner/donm/detail/sysme => "BC1"
/Rows/0/inner/donm/reme => "Bu1"
/Rows/0/inner/donm/hean => 0
/Rows/0/inner/donm/porl => "Et1"
/Tey/epath => "Cgetail"
/Tey/sustr => "MX"
/Tey/coime => 1567621527653
/Tey/msp => 1567621527653
/Tey/come => 1567621527660
/Tey/nor => "BC5"
/Tey/cid => 14789654

Try this code:
public static void flattenJson(JsonNode node, String parent, Map<String, ValueNode> map) {
    if (node instanceof ValueNode) {
        map.put(parent, (ValueNode) node);
    } else {
        String prefix = parent == null ? "" : parent + ".";
        if (node instanceof ArrayNode) {
            ArrayNode arrayNode = (ArrayNode) node;
            for (int i = 0; i < arrayNode.size(); i++) {
                flattenJson(arrayNode.get(i), prefix + i, map);
            }
        } else if (node instanceof ObjectNode) {
            ObjectNode objectNode = (ObjectNode) node;
            for (Iterator<Map.Entry<String, JsonNode>> it = objectNode.fields(); it.hasNext(); ) {
                Map.Entry<String, JsonNode> field = it.next();
                flattenJson(field.getValue(), prefix + field.getKey(), map);
            }
        } else {
            throw new RuntimeException("unknown json node");
        }
    }
}

public static Map<String, ValueNode> flattenJson(JsonNode input) {
    Map<String, ValueNode> map = new LinkedHashMap<>();
    flattenJson(input, null, map);
    return map;
}
Then you can call it like this:
ObjectMapper om = new ObjectMapper();
JsonNode jsonNode = om.readTree(json);
Map<String, ValueNode> m = flattenJson(jsonNode);
for (Map.Entry<String, ValueNode> kv : m.entrySet()) {
    System.out.println(kv.getKey() + "=" + kv.getValue().asText());
}
Output:
Source=source-1
Rows.0.Keys.device-id=BC04-EBH-N3K-01
Rows.0.Keys.interface-name=TenGigE0/0/0/39
Rows.0.Keys.node-name=0/0/CPU0
Rows.0.Timestamp=1567621527656
Rows.0.inner.donm.id=0062
Rows.0.inner.donm.mol.rem=30
Rows.0.inner.donm.mol.len=11
Rows.0.inner.donm.mol.org.ldp.0.t=486
Rows.0.inner.donm.mol.org.ldp.0.o=322
Rows.0.inner.donm.mol.org.ldp.1.t=487
Rows.0.inner.donm.mol.org.ldp.1.o=32
Rows.0.inner.donm.mol.org.ldp.1.twss=1
Rows.0.inner.donm.mol.org.ldp.1.tlv=00:01
Rows.0.inner.donm.mol.chlen=14
Rows.0.inner.donm.mol.poe=5
Rows.0.inner.donm.mol.combs=10
Rows.0.inner.donm.mol.chaype=4
Rows.0.inner.donm.mol.rek=0
Rows.0.inner.donm.mol.rem-um=67
Rows.0.inner.donm.detail.enas=B,R
Rows.0.inner.donm.detail.systes=B,R
Rows.0.inner.donm.detail.timng=91
Rows.0.inner.donm.detail.syn=C
Rows.0.inner.donm.detail.met-type=0
Rows.0.inner.donm.detail.neses.lldEDIT.0.ium=830
Rows.0.inner.donm.detail.neses.lldEDIT.0.m=1
Rows.0.inner.donm.detail.neses.lldEDIT.0.ass.ape=ipv4
Rows.0.inner.donm.detail.neses.lldEDIT.0.ass.ipvs=94
Rows.0.inner.donm.detail.pess=0008
Rows.0.inner.donm.detail.por]d=0
Rows.0.inner.donm.detail.pon=BCtive
Rows.0.inner.donm.detail.sysme=BC1
Rows.0.inner.donm.reme=Bu1
Rows.0.inner.donm.hean=0
Rows.0.inner.donm.porl=Et1
Tey.epath=Cgetail
Tey.sustr=MX
Tey.coime=1567621527653
Tey.msp=1567621527653
Tey.come=1567621527660
Tey.nor=BC5
Tey.cid=14789654

Just create a namespace for the keys by appending each level's key to the key from the higher level.
In other words, your flattened JSON's keys would look like this:
{
  "L1key::L2key": "L2val",
  "L1key": "L1val",
  "L1key::L2key::L3key": "L3val"
}
This way you guarantee uniqueness, and you can also reconstruct the original JSON from the flattened form. Just make sure that the level separator (here ::) never appears inside a key; a sketch of the idea follows below.
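For illustration, here is a minimal sketch of that idea using Jackson; the separator, class name and method names are illustrative, not something prescribed by the question:

import com.fasterxml.jackson.databind.JsonNode;

import java.util.LinkedHashMap;
import java.util.Map;

public class NamespaceFlattener {

    private static final String SEP = "::"; // must never occur inside a key

    // Walks the tree and prefixes every leaf value with the chain of parent keys.
    public static Map<String, JsonNode> flatten(JsonNode node) {
        Map<String, JsonNode> out = new LinkedHashMap<>();
        walk(node, "", out);
        return out;
    }

    private static void walk(JsonNode node, String prefix, Map<String, JsonNode> out) {
        if (node.isObject()) {
            node.fields().forEachRemaining(e ->
                walk(e.getValue(), prefix.isEmpty() ? e.getKey() : prefix + SEP + e.getKey(), out));
        } else if (node.isArray()) {
            for (int i = 0; i < node.size(); i++) {
                // The array index becomes part of the key, keeping elements unique.
                walk(node.get(i), prefix.isEmpty() ? String.valueOf(i) : prefix + SEP + i, out);
            }
        } else {
            out.put(prefix, node); // leaf value: String, number, boolean, null, ...
        }
    }
}

Splitting each flattened key on the separator recovers the original path, which is what makes reconstructing the nested JSON possible.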
HTH

Related

Scala: Transforming a List into a nested Map

I have a ConfigEntry class defined as
case class ConfigEntry(
  key: String,
  value: String
)
and a list:
val list: List[ConfigEntry] = List(
  ConfigEntry("general.first", "general first value"),
  ConfigEntry("general.second", "general second value"),
  ConfigEntry("custom.first", "custom first value"),
  ConfigEntry("custom.second", "custom second value")
)
Given a list of ConfigEntry, I want a map from each property prefix to a map of the entries under that prefix.
As an example, if I have
def getConfig: Map[String, Map[String, String]] = {
  def getKey(key: String, index: Int): String = key.split("\\.")(index)

  list.map { config =>
    getKey(config.key, 0) -> Map(getKey(config.key, 1) -> config.value)
  }.toMap
}
I get this result:
res0: Map[String,Map[String,String]] =
  Map(
    "general" ->
      Map("second" -> "general second value"),
    "custom" ->
      Map("second" -> "custom second value")
  )
but it should be:
res0: Map[String,Map[String,String]] =
  Map(
    "general" ->
      Map(
        "first" -> "general first value",
        "second" -> "general second value"
      ),
    "custom" ->
      Map(
        "first" -> "custom first value",
        "second" -> "custom second value"
      )
  )
The entries for the first key in each group are missing. This is probably caused by .toMap, which keeps only the last entry per key.
How can I do this correctly?
Thank you for any help.
You can do something like this:
final case class ConfigEntry(
  key: String,
  value: String
)

type Config = Map[String, Map[String, String]]

def getConfig(data: List[ConfigEntry]): Config =
  data
    .view
    .map(e => e.key.split('.').toList -> e.value)
    .collect {
      case (k1 :: k2 :: Nil, v) => k1 -> (k2 -> v)
    }.groupMap(_._1)(_._2)
    .view
    .mapValues(_.toMap)
    .toMap
Or something like this:
def getConfig(data: List[ConfigEntry]): Config = {
  @annotation.tailrec
  def loop(remaining: List[ConfigEntry], acc: Config): Config =
    remaining match {
      case ConfigEntry(key, value) :: xs =>
        val newAcc = key.split('.').toList match {
          case k1 :: k2 :: Nil =>
            acc.updatedWith(k1) {
              case Some(map) =>
                val newMap = map.updatedWith(k2) {
                  case Some(v) =>
                    println(s"Overwriting previous value ${v} for the key: ${key}")
                    // Just overwrite the previous value.
                    Some(value)
                  case None =>
                    Some(value)
                }
                Some(newMap)
              case None =>
                Some(Map(k2 -> value))
            }
          case _ =>
            println(s"Bad key: ${key}")
            // Just skip this key.
            acc
        }
        loop(remaining = xs, newAcc)
      case Nil =>
        acc
    }

  loop(remaining = data, acc = Map.empty)
}
I leave the handling of errors like duplicated keys or bad keys to the reader.
BTW, since this is a config, have you considered using a Config library?
Your map will only produce a one-to-one result. To do what you want, you need an accumulator (the map built so far).
Working with your existing code, if you are tied to how you parse your primary and secondary keys via getKey, you can apply foldLeft to your list instead, with an empty map as the initial value.
list.foldLeft(Map.empty[String, Map[String, String]]) { (configs, configEntry) =>
  val primaryKey = getKey(configEntry.key, 0)
  val secondaryKey = getKey(configEntry.key, 1)
  configs.get(primaryKey) match {
    case None =>
      configs.updated(primaryKey, Map(secondaryKey -> configEntry.value))
    case Some(configMap) =>
      configs.updated(primaryKey, configMap.updated(secondaryKey, configEntry.value))
  }
}
Simply:
list.map { ce =>
  val Array(l, r) = ce.key.split("\\.")
  l -> (r -> ce.value)
}                                                   // List[(String, (String, String))]
  .groupBy { case (k, _) => k }                     // Map[String, List[(String, (String, String))]]
  .view.mapValues(_.map { case (_, v) => v }.toMap) // MapView[String, Map[String, String]]
  .toMap                                            // Map[String, Map[String, String]]

Spark Accumulator

I am new to accumulators in Spark. I have created an accumulator which gathers the sum and the count of every column of a DataFrame into a Map.
It is not functioning as expected, so I have a few doubts.
When I run this class (pasted below) in local mode, I can see the accumulator getting updated, but the final value is still empty. For debugging purposes, I added a print statement in add().
Q1) Why is the final accumulable not updated even though the accumulator's add() is being called?
For reference, I studied CollectionAccumulator, which uses a SynchronizedList from Java Collections.
Q2) Does the backing collection need to be synchronized/concurrent for an accumulator to update?
Q3) Which collection is best suited for this purpose?
I have attached my execution flow along with a Spark UI snapshot for analysis.
Thanks.
EXECUTION:
INPUT DATAFRAME -
+-------+-------+
|Column1|Column2|
+-------+-------+
|1 |2 |
|3 |4 |
+-------+-------+
OUTPUT -
Add - Map(Column1 -> Map(sum -> 1, count -> 1), Column2 -> Map(sum -> 2, count -> 1))
Add - Map(Column1 -> Map(sum -> 4, count -> 2), Column2 -> Map(sum -> 6, count -> 2))
TestRowAccumulator(id: 1, name: Some(Test Accumulator for Sum&Count), value: Map())
SPARK UI SNAPSHOT -
CLASS:
class TestRowAccumulator extends AccumulatorV2[Row, Map[String, Map[String, Int]]] {

  private var colMetrics: Map[String, Map[String, Int]] = Map[String, Map[String, Int]]()

  override def isZero: Boolean = this.colMetrics.isEmpty

  override def copy(): AccumulatorV2[Row, Map[String, Map[String, Int]]] = {
    val racc = new TestRowAccumulator
    racc.colMetrics = colMetrics
    racc
  }

  override def reset(): Unit = {
    colMetrics = Map[String, Map[String, Int]]()
  }

  override def add(v: Row): Unit = {
    v.schema.foreach(field => {
      val name: String = field.name
      val value: Int = v.getAs[Int](name)
      if (!colMetrics.contains(name)) {
        colMetrics = colMetrics ++ Map(name -> Map("sum" -> value, "count" -> 1))
      } else {
        val metric = colMetrics(name)
        val sum = metric("sum") + value
        val count = metric("count") + 1
        colMetrics = colMetrics ++ Map(name -> Map("sum" -> sum, "count" -> count))
      }
    })
  }

  override def merge(other: AccumulatorV2[Row, Map[String, Map[String, Int]]]): Unit = {
    other match {
      case t: TestRowAccumulator => {
        colMetrics.map(col => {
          val map2: Map[String, Int] = t.colMetrics.getOrElse(col._1, Map())
          val map1: Map[String, Int] = col._2
          map1 ++ map2.map { case (k, v) => k -> (v + map1.getOrElse(k, 0)) }
        })
      }
      case _ => throw new UnsupportedOperationException(s"Cannot merge ${this.getClass.getName} with ${other.getClass.getName}")
    }
  }

  override def value: Map[String, Map[String, Int]] = {
    colMetrics
  }
}
After a bit of debugging, I found that the merge function is being called.
It contained erroneous code (the merged map was computed but never assigned back to colMetrics), which is why the accumulable value stayed Map().
EXECUTION FLOW OF THE ACCUMULATOR (LOCAL MODE):
ADD
ADD
MERGE
Once I corrected the merge function, the accumulator worked as expected.

Get LinkedHashMap Value with Dynamic Key

I want to get a value from a LinkedHashMap using a dynamic key, like below.
def map = [Employee1: [Status: 'Working', Id: 1], Employee2: [Status: 'Resigned', Id: 2]]
def keys = "Employee1.Status"
def keyPath = "";
def keyList = keys.tokenize(".");
keyList.eachWithIndex() { key, i ->
    keyPath += "$key"
    if (i != keyList.size() - 1) { keyPath += "." }
}
println keyPath                  //Employee1.Status
println map.keyPath              //Always null
println map.'Employee1'.'Status' //Working
println map.Employee1.Status     //Working
Here map.keyPath always returns null. How can I get the value with a dynamic key?
I think you can simply do this:
def tmpMap = map;
keyList.subList(0, keyList.size - 1).each { key ->
    tmpMap = tmpMap[key]
}
println tmpMap[keyList[keyList.size - 1]]
This will extract the sub maps until the actual value key is reached. To make this more stable you should add some logic to check if the value associated with the current key is actually a map.
Out of curiosity, I tried using just your code.
Note that keyPath == 'Employee1.Status', not 'Employee1'.'Status', which is why the property-style lookup returns null.
To make it work you can do something like this:
def map = [
    Employee1:
        [Status: 'Working', Id: 1],
    Employee2:
        [Status: 'Resigned', Id: 2]
]
def keys = "Employee1.Status"
def keyPath = "";
def keyList = keys.tokenize(".");
keyList.eachWithIndex() { key, i ->
    keyPath += "$key"
    if (i != keyList.size() - 1) { keyPath += '.' }
}
println keyPath //Employee1.Status

//tokenize it and get elements as a[0] and a[1]
a = keyPath.tokenize(".");
println map.(a[0]).(a[1]) //Working
println map.'Employee1'.'Status' //Working
println map.Employee1.Status //Working

How to write a custom serializer for Java 8 LocalDateTime

I have a class named Child1 which I want to convert into JSON using Lift JSON. Everything works fine with Joda DateTime, but now I want to use Java 8 LocalDateTime and I am unable to write a custom serializer for it. Here is my code:
import org.joda.time.DateTime
import net.liftweb.json.Serialization.{ read, write }
import net.liftweb.json.DefaultFormats
import net.liftweb.json.Serializer
import net.liftweb.json.JsonAST._
import net.liftweb.json.Formats
import net.liftweb.json.TypeInfo
import net.liftweb.json.MappingException

class Child1Serializer extends Serializer[Child1] {
  private val IntervalClass = classOf[Child1]

  def deserialize(implicit format: Formats): PartialFunction[(TypeInfo, JValue), Child1] = {
    case (TypeInfo(IntervalClass, _), json) => json match {
      case JObject(
        JField("str", JString(str)) :: JField("Num", JInt(num)) ::
        JField("MyList", JArray(mylist)) :: (JField("myDate", JInt(mydate)) ::
        JField("number", JInt(number)) :: Nil)
      ) => {
        val c = Child1(
          str, num.intValue(), mylist.map(_.values.toString.toInt), new DateTime(mydate.longValue)
        )
        c.number = number.intValue()
        c
      }
      case x => throw new MappingException("Can't convert " + x + " to Interval")
    }
  }

  def serialize(implicit format: Formats): PartialFunction[Any, JValue] = {
    case x: Child1 =>
      JObject(
        JField("str", JString(x.str)) :: JField("Num", JInt(x.Num)) ::
        JField("MyList", JArray(x.MyList.map(JInt(_)))) ::
        JField("myDate", JInt(BigInt(x.myDate.getMillis))) ::
        JField("number", JInt(x.number)) :: Nil
      )
  }
}

object Test extends App {
  case class Child1(var str: String, var Num: Int, MyList: List[Int], myDate: DateTime) {
    var number: Int = 555
  }

  val c = Child1("Mary", 5, List(1, 2), DateTime.now())
  c.number = 1
  println("number" + c.number)

  implicit val formats = DefaultFormats + new Child1Serializer
  val ser = write(c)
  println("Child class converted to string" + ser)

  var obj = read[Child1](ser)
  println("object of Child is " + obj)
  println("str" + obj.str)
  println("Num" + obj.Num)
  println("MyList" + obj.MyList)
  println("myDate" + obj.myDate)
  println("number" + obj.number)
}
Now I want to use Java 8 LocalDateTime like this:
case class Child1(var str: String, var Num: Int, MyList: List[Int], val myDate: LocalDateTime = LocalDateTime.now()) {
  var number: Int = 555
}
What modifications do I need to make in my custom serializer class Child1Serializer? I tried but was unable to get it working. Please help me.
In the serializer, serialize the date like this:
def serialize(implicit format: Formats): PartialFunction[Any, JValue] = {
  case x: Child1 =>
    JObject(
      JField("str", JString(x.str)) :: JField("Num", JInt(x.Num)) ::
      JField("MyList", JArray(x.MyList.map(JInt(_)))) ::
      JField("myDate", JString(x.myDate.toString)) ::
      JField("number", JInt(x.number)) :: Nil
    )
}
In the deserializer:
def deserialize(implicit format: Formats): PartialFunction[(TypeInfo, JValue), Child1] = {
  case (TypeInfo(IntervalClass, _), json) => json match {
    case JObject(
      JField("str", JString(str)) :: JField("Num", JInt(num)) ::
      JField("MyList", JArray(mylist)) :: (JField("myDate", JString(mydate)) ::
      JField("number", JInt(number)) :: Nil)
    ) => {
      val c = Child1(
        str, num.intValue(), mylist.map(_.values.toString.toInt), LocalDateTime.parse(mydate)
      )
      c.number = number.intValue()
      c
    }
    case x => throw new MappingException("Can't convert " + x + " to Interval")
  }
}
LocalDateTime.toString produces an ISO-8601 string, and the LocalDateTime.parse factory method can reconstruct the object from such a string.
You can define the LocalDateTime serializer like this:
class LocalDateTimeSerializer extends Serializer[LocalDateTime] {
  private val LocalDateTimeClass = classOf[LocalDateTime]

  def deserialize(implicit format: Formats): PartialFunction[(TypeInfo, JValue), LocalDateTime] = {
    case (TypeInfo(LocalDateTimeClass, _), json) => json match {
      case JString(dt) => LocalDateTime.parse(dt)
      case x => throw new MappingException("Can't convert " + x + " to LocalDateTime")
    }
  }

  def serialize(implicit format: Formats): PartialFunction[Any, JValue] = {
    case x: LocalDateTime => JString(x.toString)
  }
}
Also define your formats like this:
implicit val formats = DefaultFormats + new LocalDateTimeSerializer + new FieldSerializer[Child1]
Please note the usage of FieldSerializer to serialize the non-constructor field, number.

Is there a cleaner way to do this Group Query in MongoDB from Groovy?

I'm working on learning MongoDB. My language of choice for the current run at it is Groovy.
I'm working on group queries, trying to answer the question of which pet is the neediest one.
Below is my first attempt, and it's awful. Any help cleaning it up (or simply confirming that there isn't a cleaner way to do it) would be much appreciated.
Thanks in advance!
package mongo.pets

import com.gmongo.GMongo
import com.mongodb.BasicDBObject
import com.mongodb.DBObject

class StatsController {
    def dbPets = new GMongo().getDB('needsHotel').getCollection('pets')

    //FIXME OMG THIS IS AWFUL!!!
    def index = {
        def petsNeed = 'a walk'
        def reduce = 'function(doc, aggregator) { aggregator.needsCount += doc.needs.length }'
        def key = new BasicDBObject()
        key.put("name", true)
        def initial = new BasicDBObject()
        initial.put("needsCount", 0)
        def maxNeeds = 0
        def needyPets = []
        dbPets.group(key, new BasicDBObject(), initial, reduce).each {
            if (maxNeeds < it['needsCount']) {
                maxNeeds = it['needsCount']
                needyPets = []
                needyPets += it['name']
            } else if (maxNeeds == it['needsCount']) {
                needyPets += it['name']
            }
        }
        def needyPet = needyPets
        [petsNeedingCount: dbPets.find([needs: petsNeed]).count(), petsNeed: petsNeed, mostNeedyPet: needyPet]
    }
}
It should be possible to change the whole method to this (but I don't have MongoDB available to test it):
def index = {
    def petsNeed = 'a walk'
    def reduce = 'function(doc, aggregator) { aggregator.needsCount += doc.needs.length }'
    def key = [ name: true ] as BasicDBObject
    def initial = [ needsCount: 0 ] as BasicDBObject
    def allPets = dbPets.group( key, new BasicDBObject(), initial, reduce )
    def maxNeeds = allPets*.needsCount.collect { it as Integer }.max()
    def needyPet = allPets.findAll { maxNeeds == it.needsCount as Integer }.name
    [petsNeedingCount: dbPets.find([needs: petsNeed]).count(), petsNeed: petsNeed, mostNeedyPet: needyPet]
}
