Hi, I'm trying to create a YAML file using Java and SnakeYAML. I have done some basic coding on it and want to improve how the hash maps and array lists are created, using class objects. Can anyone help refactor this code? I'm new to Java, and I specifically want help creating classes for the redundant objects built in the createMap function.
Sample Code
package com.yaml.writer;
import org.yaml.snakeyaml.DumperOptions;
import org.yaml.snakeyaml.Yaml;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class JavaToYamlWithOptions {

    public static void main(String[] args) {
        Map<String, Object> map = createMap();
        DumperOptions options = new DumperOptions();
        options.setIndent(2);
        options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK);
        options.setPrettyFlow(true);
        Yaml yaml = new Yaml(options);
        String output = yaml.dump(map);
        System.out.println(output);
    }

    private static Map<String, Object> createMap() {
        Map<String, Object> env = new HashMap<>();
        Map<String, String> rpath = new HashMap<>();
        Map<String, String> rtab = new HashMap<>();
        List<Map<String, String>> rlist = new ArrayList<>();
        rpath.put("path", "hdfs");
        rtab.put("tabname", "tbl");
        rlist.add(rpath);
        rlist.add(rtab);
        Map<String, List<Map<String, String>>> parq = new HashMap<>();
        parq.put("parquet", rlist);
        List<Map<String, List<Map<String, String>>>> readlst = new ArrayList<>();
        readlst.add(parq);
        Map<String, List<Map<String, List<Map<String, String>>>>> readmap = new HashMap<>();
        readmap.put("read", readlst);
        Map<String, String> coltrans = new HashMap<>();
        Map<String, String> colsec = new HashMap<>();
        List<Map<String, String>> collist = new ArrayList<>();
        coltrans.put("coltrans", "colxx");
        colsec.put("coal", "false");
        collist.add(coltrans);
        collist.add(colsec);
        Map<String, List<Map<String, String>>> trans = new HashMap<>();
        trans.put("transform", collist);
        List<Map<String, List<Map<String, String>>>> trlist = new ArrayList<>();
        trlist.add(trans);
        Map<String, List<Map<String, List<Map<String, String>>>>> trmap = new HashMap<>();
        trmap.put("transform", trlist);
        env.put("environment", "kubernetes");
        Map<String, String> wpath = new HashMap<>();
        List<Map<String, String>> wlist = new ArrayList<>();
        wpath.put("path", "whdfs");
        wlist.add(wpath);
        Map<String, List<Map<String, String>>> wparq = new HashMap<>();
        wparq.put("parquet", wlist);
        List<Map<String, List<Map<String, String>>>> wrlst = new ArrayList<>();
        wrlst.add(wparq);
        Map<String, List<Map<String, List<Map<String, String>>>>> wmap = new HashMap<>();
        wmap.put("write", wrlst);
        Map<String, Object> action = new HashMap<>();
        trmap.putAll(wmap);
        readmap.putAll(trmap);
        action.put("action1", readmap);
        Map<String, Object> actions = new HashMap<>();
        actions.put("action", action);
        env.putAll(actions);
        return env;
    }
}
Expected output:
environment: "kubernetes"
action:
  action1:
    transform:
    - transform:
      - colTrans: colxx
      - coal: false
    read:
    - parquet:
      - path: "hdfs"
      - tabname: "tbl"
    write:
    - parquet:
      - path: "whdfs"
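Since the question asks specifically for classes or helpers to replace the redundant objects in createMap, here is one possible refactoring sketch (my own, not from the post; the helper names entry and block are made up). It builds the same structure that yaml.dump receives:

// Needs import java.util.Arrays; in addition to the imports above.
private static Map<String, Object> entry(String key, Object value) {
    Map<String, Object> m = new HashMap<>();
    m.put(key, value);
    return m;
}

// Captures the repeated "wrapper -> list of single-entry maps" shape.
private static List<Map<String, Object>> block(String wrapper, List<Map<String, Object>> items) {
    List<Map<String, Object>> list = new ArrayList<>();
    list.add(entry(wrapper, items));
    return list;
}

private static Map<String, Object> createMap() {
    Map<String, Object> action1 = new HashMap<>();
    action1.put("read", block("parquet", Arrays.asList(entry("path", "hdfs"), entry("tabname", "tbl"))));
    action1.put("transform", block("transform", Arrays.asList(entry("coltrans", "colxx"), entry("coal", "false"))));
    action1.put("write", block("parquet", Arrays.asList(entry("path", "whdfs"))));

    Map<String, Object> env = new HashMap<>();
    env.put("environment", "kubernetes");
    env.put("action", entry("action1", action1));
    return env;
}

Going further, the parquet/transform entries could become small POJOs, but note that SnakeYAML tags dumped beans with their class names unless a custom Representer is configured, so plain maps keep the output closest to what is expected above.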
Related
I have the following mapping: mapTechnicaltoM. My leader asked me to use Orika mapping in mapTechnicaltoM. Can someone please help me with this?
List<P> pList = new ArrayList<>();
Iterator<Map.Entry<String, Map<String, Technical>>> pIterator = asset.getP().entrySet().iterator();
while (pIterator.hasNext()) {
    Map.Entry<String, Map<String, Technical>> pPair = pIterator.next(); // was Iterator.next(), which does not compile
    Iterator<Map.Entry<String, Technical>> pTIterator = pPair.getValue().entrySet().iterator();
    while (pTIterator.hasNext()) {
        Map.Entry<String, Technical> pTRsPair = pTIterator.next();
        P eP = new P();
        eP.setCode(pPair.getKey());
        eP.setTypeCode(String.valueOf(pTRsPair.getKey()).toUpperCase());
        mapTechnicaltoM(pTRsPair.getValue(), eP); // the mapping method defined below
        pList.add(eP);
    }
}

private void mapTechnicaltoM(Technical technical, P pp) {
    pp.setStatusCode(technical.getStatusDetails().getStatusCode());
    pp.setPremium(technical.getPremiumDetails().getRetainedPremium());
}
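For reference, here is a minimal Orika sketch of what "use Orika mapping in mapTechnicaltoM" could look like (an assumption on my part, not the asker's code; Technical, P, and the nested getter paths are taken from the snippet above):

import ma.glasnost.orika.MapperFacade;
import ma.glasnost.orika.MapperFactory;
import ma.glasnost.orika.impl.DefaultMapperFactory;

// Configure the field-level mapping once, typically at startup.
MapperFactory mapperFactory = new DefaultMapperFactory.Builder().build();
mapperFactory.classMap(Technical.class, P.class)
        .field("statusDetails.statusCode", "statusCode")
        .field("premiumDetails.retainedPremium", "premium")
        .register();
MapperFacade mapper = mapperFactory.getMapperFacade();

// mapTechnicaltoM then reduces to a single call:
P pp = mapper.map(technical, P.class);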
I am trying to insert a row into BigQuery using Java.
The entity I am inserting has a field which is doubly nested.
Generating the entity suitable for BigQuery:
ObjectMapper mapper = new ObjectMapper();
String barcodeDetailsJSON = order.getBarcodeDetailsJSON();
List<StateForBQ> stateForBQList = new ArrayList<StateForBQ>();
for (State state : order.getStates()) {
    StateForBQ stateForBQ = new StateForBQ(state);
    stateForBQ.setSetOn(new Date(stateForBQ.getSetOn().getTime() / 1000));
    stateForBQList.add(stateForBQ);
}
List<BarcodeDetailForBQ> barcodeDetailForBQList = getBarcodeDetailsFromBarcodeDetailsJSON(barcodeDetailsJSON, order.getIsGrouped());
Without the following, state gets set to null (State is a nested entity):
List<Map<String, Object>> stateMap =
mapper.convertValue(stateForBQList, new TypeReference<List<Map<String, Object>>>() {});
Without the following, barcodeDetails gets set to null (BarcodeDetails is a doubly nested entity):
List<Map<String, Object>> barcodeMapList =
mapper.convertValue(barcodeDetailForBQList, new TypeReference<List<Map<String, Object>>>() {});
Without the following, productPriceDetails, productDetails, cgst, and sgst get set to null:
for (Map<String, Object> barcodeMap : barcodeMapList) {
    barcodeMap.put("productPriceDetails", mapper.convertValue(barcodeMap.get("productPriceDetails"), new TypeReference<Map<String, Object>>() {}));
    barcodeMap.put("productDetails", mapper.convertValue(barcodeMap.get("productDetails"), new TypeReference<Map<String, Object>>() {}));
    barcodeMap.put("cgst", mapper.convertValue(barcodeMap.get("cgst"), new TypeReference<Map<String, Object>>() {}));
    barcodeMap.put("sgst", mapper.convertValue(barcodeMap.get("sgst"), new TypeReference<Map<String, Object>>() {}));
}
Preparing the row content
Map<String, Object> rowContent = new HashMap<>();
rowContent.put("orderId", order.getOrderId());
rowContent.put("customerId", order.getCustomerId());
rowContent.put("barcodeDetails", barcodeMapList);
rowContent.put("states", stateMap);
Inserting to BigQuery
Gson gson = new Gson();
BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
String datasetName = "Latest_Data";
String tableName = "ORDER_TEMP";
// [START bigquery_table_insert_rows]
TableId tableId = TableId.of(datasetName, tableName);
// Values of the row to insert
String barcodeDetailsJSON = order.getBarcodeDetailsJSON();
List<BarcodeDetailForBQ> barcodeDetailForBQList = new OrderForBQ().getBarcodeDetailsFromBarcodeDetailsJSON(barcodeDetailsJSON, order.getIsGrouped());
String recordsContentString = gson.toJson(rowContent);
InsertAllResponse response =
        bigquery.insertAll(
                InsertAllRequest.newBuilder(tableId)
                        .addRow("" + order.getOrderId(), rowContent) // row id; orderId alone was undefined in the original
                        // More rows can be added in the same RPC by invoking .addRow() on the builder
                        .build());
if (response.hasErrors()) {
    // If any of the insertions failed, this lets you inspect the errors
    for (Entry<Long, List<BigQueryError>> entry : response.getInsertErrors().entrySet()) {
        // inspect row error
    }
}
Following is the response I am getting.
{
  insertErrors: {
    0: [
      {
        reason: "invalid",
        location: "",
        message: "Repeated record added outside of an array."
      }
    ]
  }
}
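A hedged note of my own on that message: BigQuery reports "Repeated record added outside of an array" when a field the table schema declares as REPEATED receives a single object instead of an array, so every repeated field in rowContent (nested ones included) must be a java.util.List. For example:

// Sketch, assuming "states" is a REPEATED RECORD in the table schema.
// Needs import java.util.Collections;
Map<String, Object> state = new HashMap<>();
state.put("name", "CREATED"); // hypothetical field
rowContent.put("states", Collections.singletonList(state)); // a List, not a bare Map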
I have to add a new column containing a UUID value. I have done this with Spark 1.4 and Java using the following code.
StructType objStructType = inputDataFrame.schema();
StructField[] arrStructField = objStructType.fields();
List<StructField> fields = new ArrayList<StructField>();
List<StructField> newfields = new ArrayList<StructField>();
List<StructField> listFields = Arrays.asList(arrStructField);
StructField a = DataTypes.createStructField(leftCol, DataTypes.StringType, true);
fields.add(a);
newfields.addAll(listFields);
newfields.addAll(fields);
final int size = objStructType.size();
JavaRDD<Row> rowRDD = inputDataFrame.javaRDD().map(new Function<Row, Row>() {
    private static final long serialVersionUID = 3280804931696581264L;

    public Row call(Row tblRow) throws Exception {
        Object[] newRow = new Object[size + 1];
        int rowSize = tblRow.length();
        for (int itr = 0; itr < rowSize; itr++) {
            if (tblRow.apply(itr) != null) {
                newRow[itr] = tblRow.apply(itr);
            }
        }
        newRow[size] = UUID.randomUUID().toString();
        return RowFactory.create(newRow);
    }
});
inputDataFrame = objsqlContext.createDataFrame(rowRDD, DataTypes.createStructType(newfields));
I'm wondering if there is a neat way to do this in Spark 2. Please advise.
You can register a UDF that generates a UUID and use the callUDF function to add the new column to your inputDataFrame. Please see the sample code using Spark 2.0.
public class SparkUUIDSample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("SparkUUIDSample").master("local[*]").getOrCreate();
        // sample input data
        List<Tuple2<String, String>> inputList = new ArrayList<Tuple2<String, String>>();
        inputList.add(new Tuple2<String, String>("A", "v1"));
        inputList.add(new Tuple2<String, String>("B", "v2"));
        // dataset
        Dataset<Row> df = spark.createDataset(inputList, Encoders.tuple(Encoders.STRING(), Encoders.STRING())).toDF("key", "value");
        df.show();
        // register udf
        UDF1<String, String> uuid = str -> UUID.randomUUID().toString();
        spark.udf().register("uuid", uuid, DataTypes.StringType);
        // call udf
        df.select(col("*"), callUDF("uuid", col("value"))).show();
        // stop
        spark.stop();
    }
}
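If the new column should have an explicit name, withColumn works with the same registered UDF:

// Same UDF as above, but the resulting column is named explicitly.
df.withColumn("uuid", callUDF("uuid", col("value"))).show();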
HashMap<String, HashMap<String, String>> hm = new HashMap<String, HashMap<String, String>>();
hm.put("Title1", "Key1");
for (int i = 0; i < 2; i++) {
    HashMap<String, String> hm1 = new HashMap<String, String>();
    hm1.put("Key1", "Value1");
}
When I look up "Title1", I want it to give me the other HashMap. I want this kind of structure:
hm<key, value (object hm1)>
hm1<key, value>
i.e. the first HashMap's value is the second HashMap, so the first map's key leads to the second map's keys.
If I understand correctly what you want, use the following code:
HashMap<String, HashMap<String, String>> hm = new HashMap<>();
HashMap<String, String> hm1 = new HashMap<>();
for (int i = 0; i < 2; i++) {
    hm1.put("Key1", "Value1");
}
hm.put("Title1", hm1); // save hm1 under "Title1"
...
HashMap<String, String> hm2 = hm.get("Title1");
String s = hm2.get("Key1"); // s = "Value1"
Or you can create a new class
class HashKey {
    private String title;
    private String key;
    ...
    // getters, setters, constructor, hashCode and equals
}
and just use HashMap<HashKey, String> hm, for example:
hm.put(new HashKey("Title1", "Key 1"), "Value");
...
String s = hm.get(new HashKey("Title1", "Key 1")); // Value
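For completeness, here is one way the elided parts of HashKey could look (a sketch of my own; a correct equals/hashCode pair is what makes the class usable as a map key):

import java.util.Objects;

class HashKey {
    private final String title;
    private final String key;

    HashKey(String title, String key) {
        this.title = title;
        this.key = key;
    }

    // Two keys are equal when both parts match; required for map lookups.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof HashKey)) return false;
        HashKey other = (HashKey) o;
        return Objects.equals(title, other.title) && Objects.equals(key, other.key);
    }

    @Override
    public int hashCode() {
        return Objects.hash(title, key);
    }
}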
You can do something like this:
HashMap<String,HashMap<String,String>> hm = new HashMap<String,HashMap<String,String>>();
HashMap<String,String> hm1 = new HashMap<String,String>();
hm1.put("subkey1","subvalue");
hm.put("Title1",hm1);
HashMap<String,String> newhm = hm.get("Title1");
import java.util.HashMap;
import java.util.Map;

public class MapInMap {
    Map<String, Map<String, String>> standards = new HashMap<String, Map<String, String>>();

    void addValues() {
        Map<String, String> studentA = new HashMap<String, String>();
        studentA.put("A1", "49");
        studentA.put("A2", "45");
        studentA.put("A3", "43");
        studentA.put("A4", "39");
        standards.put("A", studentA);
        Map<String, String> studentB = new HashMap<String, String>();
        studentB.put("B1", "29");
        studentB.put("B2", "25");
        studentB.put("B3", "33");
        studentB.put("B4", "29");
        standards.put("B", studentB);
    }

    void disp() {
        for (Map.Entry<String, Map<String, String>> entryL1 : standards.entrySet()) {
            System.out.println("Standard :" + entryL1.getKey());
            for (Map.Entry<String, String> entryL2 : entryL1.getValue().entrySet()) {
                System.out.println(entryL2.getKey() + "/" + entryL2.getValue());
            }
        }
    }

    public static void main(String args[]) {
        MapInMap inMap = new MapInMap();
        inMap.addValues();
        inMap.disp();
    }
}
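A small caveat on the example above (my note, not the original author's): HashMap does not guarantee iteration order, so disp() may print the standards and marks in any order. If insertion order matters, LinkedHashMap is a drop-in replacement:

// Preserves insertion order on display; needs import java.util.LinkedHashMap;
Map<String, Map<String, String>> standards = new LinkedHashMap<String, Map<String, String>>();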
First, sorry for my poor English.
Second, my problem: I am trying to convert this structure to JSON and back:
class Revision {
    private String auth;
    private HashMap<String, List<HashMap<String, Object>>> rev;

    public String getAuth() {
        return auth;
    }

    public HashMap<String, List<HashMap<String, Object>>> getRev() {
        return rev;
    }

    public void setAuth(String auth) {
        this.auth = auth;
    }

    public void setRev(HashMap<String, List<HashMap<String, Object>>> rev) {
        this.rev = (HashMap<String, List<HashMap<String, Object>>>) rev.clone();
    }

    public String toString() {
        return "Auth: " + auth + ", rev: " + rev;
    }
}
I do it with this code:
public static void main(String[] argc) {
    Gson gson = new Gson();
    Revision revision = new Revision();
    HashMap<String, List<HashMap<String, Object>>> HM = new HashMap<String, List<HashMap<String, Object>>>();
    List<HashMap<String, Object>> list = new ArrayList<HashMap<String, Object>>();
    HashMap<String, Object> HMin = new HashMap<String, Object>();
    HMin.put("id", 12);
    HMin.put("type", "toster");
    list.add(HMin);
    HM.put("mark", list);
    revision.setRev(HM);
    revision.setAuth("ololo");
    String json = gson.toJson(revision);
    Revision test = new Gson().fromJson(json, Revision.class);
    System.out.println(json);
    System.out.println(revision);
    System.out.println(test);
}
Finally, I get this result:
{"auth":"ololo","rev":{"mark":[{"id":12,"type":"toster"}]}}
Auth: ololo, rev: {mark=[{id=12, type=toster}]}
Auth: ololo, rev: {mark=[{id=java.lang.Object#1c672d0, type=java.lang.Object#19bd03e}]}
As you can see, after the conversion the Object-typed values are incorrect.
Can you tell me how I can fix this?
Thank you in advance!
Try this out and see if it works. Yes, I know you want to support the Object type, but this is just for the sake of trying.
Gson gson = new Gson();
Revision revision = new Revision();
HashMap<String, List<HashMap<String, String>>> HM = new HashMap<String, List<HashMap<String, String>>>();
List<HashMap<String, String>> list = new ArrayList<HashMap<String, String>>();
HashMap<String, String> HMin = new HashMap<String, String>();
HMin.put("id", "12");
HMin.put("type", "toster");
list.add(HMin);
HM.put("mark", list);
revision.setRev(HM);
revision.setAuth("ololo");
String json = gson.toJson(revision);
Revision test = new Gson().fromJson(json, Revision.class);
System.out.println(json);
System.out.println(revision);
System.out.println(test);
[Edited]
Now try this snippet directly, with a corresponding change in the Revision class.
Revision test = new Gson().fromJson("{\"auth\":\"ololo\",\"rev\":{\"mark\":[{\"id\":12,\"type\":13}]}}", Revision.class);
System.out.println(test);
Change the corresponding declaration in the Revision class to this:
HashMap<String, List<HashMap<String, Integer>>> HM = new HashMap<String, List<HashMap<String, Integer>>>();
This is to make sure that it works correctly with a specific type. If it does, we can be sure that it somehow can't work with the Object type. Then you can file a bug with them, for their own good. And for the time being you can switch to some other API if you like; you can find a few options here.
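A side note of my own, beyond the original answer: with reasonably recent Gson versions, JSON values landing in Object-declared slots come back as String, Double, or LinkedTreeMap rather than bare java.lang.Object, and a generic container can be round-tripped through a TypeToken. A small sketch:

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.HashMap;
import java.util.List;

public class GsonGenericsSketch {
    public static void main(String[] args) {
        // Deserialize just the "rev" part of the JSON from the question.
        Type revType = new TypeToken<HashMap<String, List<HashMap<String, Object>>>>() {}.getType();
        HashMap<String, List<HashMap<String, Object>>> rev =
                new Gson().fromJson("{\"mark\":[{\"id\":12,\"type\":\"toster\"}]}", revType);
        System.out.println(rev); // prints {mark=[{id=12.0, type=toster}]} -- numbers parse as Double
    }
}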