SnakeYAML formatting - remove YAML curly brackets - java

I have some code that dumps a LinkedHashMap into a YAML file:
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.LinkedHashMap;
import org.yaml.snakeyaml.DumperOptions;
import org.yaml.snakeyaml.Yaml;

public class TestDump {
    public static void main(String[] args) {
        LinkedHashMap<String, Object> values = new LinkedHashMap<String, Object>();
        values.put("one", 1);
        values.put("two", 2);
        values.put("three", 3);
        DumperOptions options = new DumperOptions();
        options.setIndent(2);
        options.setPrettyFlow(true);
        Yaml output = new Yaml(options);
        File targetYAMLFile = new File("C:\\temp\\sample.yaml");
        FileWriter writer = null;
        try {
            writer = new FileWriter(targetYAMLFile);
        } catch (IOException e) {
            e.printStackTrace();
        }
        output.dump(values, writer);
    }
}
But the output looks like this:
{
  one: 1,
  two: 2,
  three: 3
}
Is there a way to get something like this instead?
one: 1
two: 2
three: 3
Although the first one is valid YAML, I want the output to be formatted like the second one.

It turns out this is just a matter of configuration via DumperOptions:
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.LinkedHashMap;
import org.yaml.snakeyaml.DumperOptions;
import org.yaml.snakeyaml.Yaml;

public class TestDump {
    public static void main(String[] args) {
        LinkedHashMap<String, Object> values = new LinkedHashMap<String, Object>();
        values.put("one", 1);
        values.put("two", 2);
        values.put("three", 3);
        DumperOptions options = new DumperOptions();
        options.setIndent(2);
        options.setPrettyFlow(true);
        // Fix below - additional configuration
        options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK);
        Yaml output = new Yaml(options);
        File targetYAMLFile = new File("C:\\temp\\sample.yaml");
        FileWriter writer = null;
        try {
            writer = new FileWriter(targetYAMLFile);
        } catch (IOException e) {
            e.printStackTrace();
        }
        output.dump(values, writer);
    }
}
This solves the problem.
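One caveat with the snippet as posted: the FileWriter is never closed. A minimal variant of the write step, assuming the same output, values, and targetYAMLFile objects as above, that closes the writer automatically:

try (FileWriter writer = new FileWriter(targetYAMLFile)) {
    // dump() writes through the writer; try-with-resources guarantees
    // the file is flushed and the handle released
    output.dump(values, writer);
} catch (IOException e) {
    e.printStackTrace();
}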

Related

Append to existing CSV with headers

I have a method that appends to a .csv file, but the problem is that it adds a header row every time as well. How can I append to the .csv correctly?
I am aware that collecting everything into a List first would do the job, but this method is called in separate runs.
public static void writeToCSVFileAndSend(String facilityId, int candidateStockTakeContainersCount) throws IOException {
    FileWriter report = new FileWriter("/tmp/MonthlyExpectedComplianceSuggestions.csv", true);
    LocalDate today = java.time.LocalDate.now();
    String[] headers = { "Warehouse", "Expected Count for " + today.getMonth().getDisplayName(TextStyle.SHORT, Locale.ENGLISH) };
    Map<String, Integer> facilityExpectedMonthlyCountMap = new HashMap<String, Integer>() {
        {
            put(facilityId, candidateStockTakeContainersCount);
        }
    };
    try (CSVPrinter printer = new CSVPrinter(report, CSVFormat.DEFAULT
            .withHeader(headers))) {
        facilityExpectedMonthlyCountMap.forEach((a, b) -> {
            try {
                printer.printRecord(a, b);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }
}
Current Output
Warehouse,Expected Count for Dec
A,2147
Warehouse,Expected Count for Dec
B,0
Expected Output
Warehouse,Expected Count for Dec
A,2147
B,0
To avoid multiple headers, you should create the CSVPrinter object once and reuse it.
Depending on how you are getting the data, you may split the function in two and pass the CSVPrinter object around; a sketch of that follows the code below.
public static void writeToCSVFileAndSend() throws IOException {
    File outputCSV = new File("/tmp/MonthlyExpectedComplianceSuggestions.csv");
    LocalDate today = java.time.LocalDate.now();
    String[] headers = { "Warehouse", "Expected Count for " + today.getMonth().getDisplayName(TextStyle.SHORT, Locale.ENGLISH) };
    boolean headerRequired = !outputCSV.exists();
    // open in append mode so previously written rows are kept
    FileWriter report = new FileWriter(outputCSV, true);
    CSVPrinter printer;
    if (headerRequired) {
        printer = new CSVPrinter(report, CSVFormat.DEFAULT.withHeader(headers));
    } else {
        printer = new CSVPrinter(report, CSVFormat.DEFAULT);
    }
    // Iterate through combinations of facilityId and candidateStockTakeContainersCount
    // and call printRecord for each
    Map<String, Integer> facilityExpectedMonthlyCountMap = new HashMap<String, Integer>();
    // fill in your data in the map here
    facilityExpectedMonthlyCountMap.forEach((a, b) -> {
        try {
            printer.printRecord(a, b);
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
    printer.close();
}
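To make the "split the function in two" idea concrete, here is a hypothetical sketch (the names openReport and appendRow are made up; it assumes Apache Commons CSV, as in the question). The printer is created once per run, the header is written only when the file does not exist yet, and every row goes through the same printer:

// Open the printer once; write the header only for a brand-new file.
public static CSVPrinter openReport(String path, String[] headers) throws IOException {
    boolean isNewFile = !new File(path).exists();
    FileWriter report = new FileWriter(path, true); // append mode keeps existing rows
    CSVFormat format = isNewFile
            ? CSVFormat.DEFAULT.withHeader(headers)
            : CSVFormat.DEFAULT;
    return new CSVPrinter(report, format);
}

// Append a single row through the shared printer.
public static void appendRow(CSVPrinter printer, String facilityId, int count) throws IOException {
    printer.printRecord(facilityId, count);
}

The caller opens the report once, calls appendRow for each facility, and closes the printer at the end of the run.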

Any way to use JavaSparkContext in JavaRdd.map(rdd -> {})?

I am thinking of doing the following. However, I get an error saying JavaSparkContext (sc) is not serializable. I am wondering if there is any way to bypass this?
javaRdd.map(rdd -> {
    List<String> data = new ArrayList<>();
    ObjectMapper mapper = new ObjectMapper();
    for (EntityA a : rdd) {
        String json = null;
        try {
            json = mapper.writeValueAsString(a);
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
        data.add(json);
    }
    JavaRDD<String> rddData = sc.parallelize(data);
    DataFrame df = sqlContext.read().schema(schema).json(rddData);
});
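The usual way around this, as a sketch (assuming the Spark 1.x JavaRDD/SQLContext/DataFrame API from the snippet, and that each element of javaRdd is a collection of EntityA, as the for-loop suggests), is to keep the context out of the closure entirely: convert the elements to JSON on the executors and build the DataFrame from the resulting RDD on the driver.

// No JavaSparkContext is captured by the lambda. Spark's function
// interfaces declare "throws Exception", so writeValueAsString needs
// no try/catch here.
JavaRDD<String> jsonRdd = javaRdd.flatMap(entities -> {
    ObjectMapper mapper = new ObjectMapper();
    List<String> data = new ArrayList<>();
    for (EntityA a : entities) {
        data.add(mapper.writeValueAsString(a));
    }
    return data; // Spark 1.x flatMap expects an Iterable; in 2.x return data.iterator()
});
DataFrame df = sqlContext.read().schema(schema).json(jsonRdd);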

How can I write an object into a txt or csv file?

All I get is the address of the HashMap; FileWriter doesn't work, and I've tried everything: Scanner, BufferedWriter, re-writing the write() method.
public class hmapTest implements java.io.Serializable {
    public static void main(String[] args) {
        HashMap<Integer, String> hmap3 = new HashMap<Integer, String>();
        // add elements to hashmap
        hmap3.put(1, "to");
        hmap3.put(2, "in");
        hmap3.put(3, "at");
        hmap3.put(4, "on");
        hmap3.put(5, "under");
        try {
            FileOutputStream fos = new FileOutputStream("hashser.txt");
            ObjectOutputStream ous = new ObjectOutputStream(fos);
            ous.writeObject(hmap3);
            ous.close();
            fos.close();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }
}
Output:
’’loadFactorI etc
If you just need an arbitrary human-readable form, you could do something like this:
class Main {
    public static void main(String[] args) throws FileNotFoundException {
        HashMap<Integer, String> hmap = new HashMap<>();
        hmap.put(1, "to");
        hmap.put(2, "in");
        hmap.put(3, "at");
        hmap.put(4, "on");
        hmap.put(5, "under");
        try (XMLEncoder stream = new XMLEncoder(new BufferedOutputStream(System.out))) {
            stream.writeObject(hmap);
        }
    }
}
Yet, if it has to be CSV, you have to use a third-party library (e.g. https://poi.apache.org/) or implement it on your own. Also, if your data is just a list of consecutive integers mapping to values, consider using a List instead of a HashMap.
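Implementing it on your own is only a few lines for a map like this. A minimal sketch (the file name is made up, and there is no quoting or escaping, so it only suits values without commas, quotes, or line breaks):

// Hand-rolled CSV dump of the map; the enclosing method must
// declare or handle IOException.
try (PrintWriter out = new PrintWriter(new FileWriter("hmap.csv"))) {
    for (Map.Entry<Integer, String> entry : hmap.entrySet()) {
        out.println(entry.getKey() + "," + entry.getValue());
    }
}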

How to write a Java CSV parser using opencsv

I have to parse a CSV file where the number of columns is variable.
I have written the following code for fixed columns, using the CsvToBean and MappingStrategy APIs for parsing.
Please help me: how can I create the mappings dynamically?
public class OpencsvExecutor2 {
    public static void main(String[] args) throws IOException {
        CsvToBean csv = new CsvToBean();
        String csvFilename = "C:\\Users\\ersvvwa\\Desktop\\taks\\supercsv\\20160511-0750--MaS_GsmrRel\\20160511-0750--MaS_GsmrRel.txt";
        CSVReader csvReader = null;
        List objList = new ArrayList<DataBean>();
        try {
            FileInputStream fis = new FileInputStream(csvFilename);
            BufferedReader myInput = new BufferedReader(new InputStreamReader(fis));
            csvReader = new CSVReader(new InputStreamReader(new FileInputStream(csvFilename), "UTF-8"), ' ', '\'', 1);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        csvReader.getRecordsRead();
        // Set column mapping strategy
        List<DataBean> list = csv.parse(setColumMapping(csvReader), csvReader);
        for (Object object : list) {
            DataBean obj = (DataBean) object;
            // System.out.println(obj.Col1);
            objList.add(obj);
        }
        csvReader.close();
        System.out.println("list size " + list.size());
        System.out.println("objList size " + objList.size());
        String outFile = "C:\\Users\\ersvvwa\\Desktop\\taks\\supercsv\\20160511-0750--MaS_GsmrRel\\20160511-0750--MaS_GsmrRel.csv";
        try {
            CSVWriter csvWriter = null;
            csvWriter = new CSVWriter(new FileWriter(outFile), CSVWriter.DEFAULT_SEPARATOR, CSVWriter.NO_QUOTE_CHARACTER);
            //csvWriter = new CSVWriter(out, CSVWriter.DEFAULT_SEPARATOR, CSVWriter.NO_QUOTE_CHARACTER);
            String[] columns = new String[] { "col1", "col2", "col3", "col4" };
            // Writer w = new FileWriter(out);
            BeanToCsv bc = new BeanToCsv();
            List ls;
            csvWriter.writeNext(columns);
            //bc.write(setColumMapping(), csvWriter, objList);
            System.out.println("complete");
            csvWriter.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private static MappingStrategy setColumMapping(CSVReader csvReader) throws IOException {
        ColumnPositionMappingStrategy strategy = new ColumnPositionMappingStrategy();
        strategy.setType(DataBean2.class);
        String[] columns = new String[] { "col1", "col2", "col3", "col4" };
        strategy.setColumnMapping(columns);
        return strategy;
    }
}
If I understood correctly, you can read the file line by line and use split.
Example of reading a CSV (extracted from mkyong):
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class ReadCVS {
    public static void main(String[] args) {
        ReadCVS obj = new ReadCVS();
        obj.run();
    }

    public void run() {
        String csvFile = "/Users/mkyong/Downloads/GeoIPCountryWhois.csv";
        BufferedReader br = null;
        String line = "";
        String cvsSplitBy = ",";
        try {
            br = new BufferedReader(new FileReader(csvFile));
            while ((line = br.readLine()) != null) {
                // use comma as separator
                String[] country = line.split(cvsSplitBy);
                System.out.println("Country [code= " + country[4]
                        + " , name=" + country[5] + "]");
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (br != null) {
                try {
                    br.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        System.out.println("Done");
    }
}
Example of writing a CSV file (extracted from mkyong):
import java.io.FileWriter;
import java.io.IOException;

public class GenerateCsv {
    public static void main(String[] args) {
        generateCsvFile("c:\\test.csv");
    }

    private static void generateCsvFile(String sFileName) {
        try {
            FileWriter writer = new FileWriter(sFileName);
            writer.append("DisplayName");
            writer.append(',');
            writer.append("Age");
            writer.append('\n');
            writer.append("MKYONG");
            writer.append(',');
            writer.append("26");
            writer.append('\n');
            writer.append("YOUR NAME");
            writer.append(',');
            writer.append("29");
            writer.append('\n');
            // generate whatever data you want
            writer.flush();
            writer.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
However, I would recommend using a library. There are many (e.g., opencsv, Apache Commons CSV, Jackson Dataformat CSV), so you don't have to re-invent the wheel.
The opencsv website has a lot of examples that you can use.
If you Google "opencsv read example" you will find many more examples using the opencsv library (e.g., "Parse / Read / write CSV files : OpenCSV tutorial").
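For instance, a minimal opencsv read loop (a sketch, assuming the opencsv 3.x-era API the question itself uses, and a made-up file name):

// Iterate rows as String[] arrays; the reader is closed automatically.
try (CSVReader reader = new CSVReader(new FileReader("data.csv"))) {
    String[] row;
    while ((row = reader.readNext()) != null) {
        System.out.println(String.join(",", row));
    }
}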
Hopefully this helps!
Assuming that your code works, I would try to use generics for the setColumMapping method.
The setType method takes a parameter of type Class, so use a Class parameter in your own setColumMapping method, e.g. (Class<T> type, String[] columns). That way you can pass DataBean2.class to the method, or any other class. Furthermore, you need a variable column-to-bean mapping, because {"col1","col2","col3","col4"} is not sufficient for every bean, as you know; passing a String[] to setColumMapping is one way to make it dynamic.
You will also need to adjust the raw List usage inside your main method accordingly.
I suggest reading a brief tutorial on Java generics before you start programming; a sketch of the idea follows.
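A sketch of what that could look like, generic over the bean type, with the caller supplying the column names (the signature is an assumption, not tested against your beans):

// The mapping is no longer hard-coded to DataBean2 or to four
// fixed column names.
private static <T> MappingStrategy<T> setColumMapping(Class<T> type, String[] columns) {
    ColumnPositionMappingStrategy<T> strategy = new ColumnPositionMappingStrategy<T>();
    strategy.setType(type);
    strategy.setColumnMapping(columns);
    return strategy;
}

// usage: csv.parse(setColumMapping(DataBean2.class, new String[] { "col1", "col2" }), csvReader);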
Finally I was able to parse the CSV and write it in the desired format:
csvWriter = new CSVWriter(new FileWriter(outFile), CSVWriter.DEFAULT_SEPARATOR, CSVWriter.NO_QUOTE_CHARACTER);
csvReader = new CSVReader(new InputStreamReader(new FileInputStream(csvFilename), "UTF-8"), ' ');
String header = "NW,MSC,BSC,CELL,CELL_0";
List<String> headerList = new ArrayList<String>();
headerList.add(header);
csvWriter.writeNext(headerList.toArray(new String[headerList.size()]));
while ((nextLine = csvReader.readNext()) != null) {
    // nextLine[] is an array of values from the line
    for (int j = 0; j < nextLine.length; j++) {
        // System.out.println("next " + nextLine[1] + " " + nextLine[2] + " " + nextLine[2]);
        if (nextLine[j].contains("cell") ||
                nextLine[j].equalsIgnoreCase("NW") ||
                nextLine[j].equalsIgnoreCase("MSC") ||
                nextLine[j].equalsIgnoreCase("BSC") ||
                nextLine[j].equalsIgnoreCase("CELL")) {
            hm.put(nextLine[j], j);
        }
    }
    break;
}
String[] out = null;
while ((row = csvReader.readNext()) != null) {
    String[] arr = new String[4];
    outList = new ArrayList<>();
    innerList = new ArrayList<>();
    finalList = new ArrayList<String[]>();
    String[] str = null;
    int x = 4;
    for (int y = 0; y < hm.size() - 10; y++) {
        if (!row[x].equalsIgnoreCase("NULL") || !row[x].equals(" ")) {
            System.out.println("x " + x);
            str = new String[] { row[0], row[1], row[2], row[3], row[x] };
        }
        finalList.add(str);
        x = x + 3;
    }
    csvWriter.writeAll(finalList);
    break;
}
csvReader.close();
csvWriter.close();

Ordering keys in a .properties file alphabetically while ignoring case

I have a function that stores values from a .properties file into a TreeMap (translatedMap), then retrieves new values from keyMap and stores them into translatedMap as well. The issue is that, no matter what I do, it always separates capitalized keys from non-capitalized keys. Here is my code:
Properties translation = new Properties() {
    private static final long serialVersionUID = 1L;

    @Override
    public synchronized Enumeration<Object> keys() {
        return Collections.enumeration(new TreeSet<Object>(super.keySet()));
    }
};
// creates file and stores values of keyMap into the file
try {
    TreeMap<String, String> translatedMap = new TreeMap<String, String>(String.CASE_INSENSITIVE_ORDER);
    InputStreamReader in = new InputStreamReader(new FileInputStream(filePath), "UTF-8");
    translation.load(in);
    // Store all values to TreeMap and sort
    Enumeration<?> e = translation.propertyNames();
    while (e.hasMoreElements()) {
        String key = (String) e.nextElement();
        if (!key.matches(".#")) {
            String value = translation.getProperty(key);
            translatedMap.put(key, value);
        }
    }
    // Add new values to translatedMap
    for (String key : keyMap.keySet()) {
        // Handle if some keys have already been added; delete so they can be re-added
        if (translatedMap.containsKey(key)) {
            translatedMap.remove(key);
        }
        translatedMap.put(key, keyMap.get(key));
    }
    in.close();
    translation.putAll(translatedMap);
    File translationFile = new File(filePath);
    OutputStreamWriter out = new OutputStreamWriter(new FileOutputStream(translationFile, false), "UTF-8");
    translation.store(out, null);
    out.close();
} catch (IOException e) {
    e.printStackTrace();
}
The output I'm getting is something like:
CAPITALIZED_KEY1=value1
CAPITALIZED_KEY2=value2
alowercase.key=value3
anotherlowercase.key=value4
morelowercase.keys=value5
When I would want it to come out like:
alowercase.key=value3
anotherlowercase.key=value4
CAPITALIZED_KEY1=value1
CAPITALIZED_KEY2=value2
morelowercase.keys=value5
Properties are not ordered. It does not matter in what order you insert into them, or whether you call putAll() with something that is sorted: they extend Hashtable.
The basic problem is that, although the sort should ignore case, the map should still be case-sensitive, because property names are case-sensitive.
Hence, override Properties and sort the names case-insensitively on writing.
public class SortedProperties extends Properties {
    @Override
    public void store(Writer writer, String comments) throws IOException {
        List<String> names = new ArrayList<>();
        for (Enumeration<?> en = propertyNames(); en.hasMoreElements(); ) {
            String name = en.nextElement().toString();
            names.add(name);
        }
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String one, String other) {
                return one.toLowerCase().compareTo(other.toLowerCase());
            }
        });
        //... write all properties
    }
}
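One way to fill in the elided write step, as a sketch (note it skips the escaping of '=', ':' and non-Latin-1 characters that Properties.store() normally performs):

// Write one name=value line per sorted name, using the writer and
// the names list from the method above.
for (String name : names) {
    writer.write(name + "=" + getProperty(name));
    writer.write(System.lineSeparator());
}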
To achieve this I ended up avoiding the store function altogether and did the sorting inside the TreeMap, then wrote to the file with a BufferedWriter, like this:
Properties translation = new Properties();
// creates file and stores values of keyMap into the file
try {
    TreeMap<String, String> translatedMap = new TreeMap<String, String>(new Comparator<String>() {
        public int compare(String o1, String o2) {
            return o1.toLowerCase().compareTo(o2.toLowerCase());
        }
    });
    InputStreamReader in = new InputStreamReader(new FileInputStream(filePath), "UTF-8");
    translation.load(in);
    // Store all values to TreeMap and sort
    for (String key : translation.stringPropertyNames()) {
        keyMap.put(key, translation.getProperty(key));
    }
    in.close();
    Iterator<String> it = keyMap.keySet().iterator();
    while (it.hasNext()) {
        String key = it.next();
        translatedMap.put(key, keyMap.get(key));
    }
    BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(filePath, false), "UTF-8"));
    bw.write("#" + new Date().toString());
    bw.newLine();
    Iterator<String> it2 = translatedMap.keySet().iterator();
    while (it2.hasNext()) {
        String key = it2.next();
        bw.write(key + '=' + translatedMap.get(key));
        bw.newLine();
    }
    bw.close();
} catch (IOException e) {
    e.printStackTrace();
}
