How to generate a CSV from a list of maps? - java

How to generate a CSV from a list of maps?
I am trying to use the keys as headers and the values as columns, but I end up with a list of values under the same key.
(sample output image omitted)

To generate a CSV file from a list of maps in Java, you can use a combination of the StringJoiner class and the FileWriter class.
// Use the keys of the first map as the header row, then write one row of values per map.
// This assumes every map has the same keys as the first one.
List<String> headers = new ArrayList<>(listOfMaps.get(0).keySet());
try (FileWriter fw = new FileWriter("output.csv")) {
    fw.write(String.join(",", headers) + System.lineSeparator());
    for (Map<String, String> map : listOfMaps) {
        StringJoiner sj = new StringJoiner(",");
        for (String header : headers) {
            sj.add(map.get(header));
        }
        fw.write(sj.toString() + System.lineSeparator());
    }
}
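For example (purely illustrative data), if listOfMaps holds two LinkedHashMaps {name=John, age=30} and {name=Jane, age=25}, the file would contain a header row name,age followed by the rows John,30 and Jane,25. This sketch assumes every map shares the first map's keys and that no value contains a comma; otherwise the values need quoting, or a CSV library such as OpenCSV or Jackson's CsvMapper.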

Related

Order By values column CSV Java

I have a CSV file with two attributes: the first of type string and the second of type double.
Starting from this CSV file, I would like to produce another one, sorted in ascending order by the value of the second attribute. SQL has the ORDER BY clause that sorts a table by the specified attribute; I would like to get the same result as ORDER BY.
Example input CSV file:
tricolor;14.0
career;9.0
salty;1020.0
looks;208.0
bought;110.0
Expected output CSV file:
career;9.0
tricolor;14.0
bought;110.0
looks;208.0
salty;1020.0
Read the CSV file into a List of Object[] (one Object[] per line in your CSV file)
First element of the array is the line itself (a String)
Second element of the array is the value of the double (a Double)
so you have the following list:
{
["tricolor;14.0", 14.0],
["career;9.0", 9.0],
["salty;1020.0", 1020.0],
["looks;208.0", 208.0],
["bought;110.0", 110.0]
}
Then sort it based on the value of the double
And you can then write it back to a CSV file (only writing the first element of each array)
List<Object[]> list = readFile("myFile.csv");
list.sort(Comparator.comparing(p -> (Double)p[1]));
// write to csv file, just printing it out here
list.forEach(p -> System.out.println(p[0]));
The method to read the file:
private static List<Object[]> readFile(String fileName) {
List<Object[]> list = new ArrayList<>();
try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
String line;
String[] splitLine;
while ((line = br.readLine()) != null) {
splitLine = line.split(";");
// add an array, first element is the line itself, second element is the double value
list.add(new Object[] {line, Double.valueOf(splitLine[1])});
}
} catch (IOException e) {
e.printStackTrace();
}
return list;
}
EDIT If you want reverse order:
Once you have your sorted list, you can reverse it using the convenient reverse method on the Collections class
Collections.reverse(list);
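Alternatively (just a sketch, not part of the original answer), you can sort in descending order in one step with a reversed comparator:
list.sort(Comparator.comparing((Object[] p) -> (Double) p[1]).reversed());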
We can try the general approach of parsing the file into a sorted map (e.g. a TreeMap), then iterating over the map and writing it back out to a file. Note that a TreeMap sorts by key, so this orders the rows alphabetically by the first column rather than by the double value.
TreeMap<String, Double> map = new TreeMap<String, Double>();
try (BufferedReader br = Files.newBufferedReader(Paths.get("yourfile.csv"))) {
String line;
while ((line = br.readLine()) != null) {
String[] parts = line.split(";");
map.put(parts[0], Double.parseDouble(parts[1]));
}
}
catch (IOException e) {
System.err.format("IOException: %s%n", e);
}
// now write the map to file, sorted ascending in alphabetical order
try (FileWriter writer = new FileWriter("yourfileout.csv");
BufferedWriter bw = new BufferedWriter(writer)) {
for (Map.Entry<String, Double> entry : map.entrySet()) {
bw.write(entry.getKey() + ";" + entry.getValue());
bw.newLine(); // without this, every row would end up on the same line
}
}
catch (IOException e) {
System.err.format("IOException: %s%n", e);
}
Notes:
I assume that the string values in the first column are always unique. If there could be duplicates, the code above would have to be modified to use a map of lists, or something along those lines (a sketch follows after these notes).
I also assume that the string values would all be lowercase. If not, then you might not get the sorting you expect. One solution, should this be a problem, would be to lowercase (or uppercase) every string before inserting that key into the map.
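For instance, a rough sketch of the map-of-lists variant from the first note (the readAllValues name is only illustrative):
// Sketch: keep every value seen for a key instead of overwriting earlier ones.
private static Map<String, List<Double>> readAllValues(Path file) throws IOException {
    Map<String, List<Double>> map = new TreeMap<>();
    for (String line : Files.readAllLines(file)) {
        String[] parts = line.split(";");
        map.computeIfAbsent(parts[0], k -> new ArrayList<>()).add(Double.parseDouble(parts[1]));
    }
    return map;
}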
Read the semicolon-separated values into a LinkedHashMap:
Map<String, Double> map = new LinkedHashMap<String, Double>();
try (BufferedReader br = Files.newBufferedReader(Paths.get("yourfile.csv"))) {
String line;
while ((line = br.readLine()) != null) {
String[] parts = line.split(";");
map.put(parts[0], Double.parseDouble(parts[1]));
}
}
catch (IOException e) {
System.err.format("IOException: %s%n", e);
}
Then sort the map based on the double values. With Java 8 you can try:
LinkedHashMap<String, Double> sortedMap = map.entrySet().stream()
        .sorted(Entry.comparingByValue())
        .collect(Collectors.toMap(Entry::getKey, Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new));
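To complete the exercise, a possible sketch for writing the sorted map back out to a CSV file (the output file name is just an example):
try (BufferedWriter bw = Files.newBufferedWriter(Paths.get("yourfileout.csv"))) {
    for (Map.Entry<String, Double> entry : sortedMap.entrySet()) {
        // keep the original semicolon-separated format
        bw.write(entry.getKey() + ";" + entry.getValue());
        bw.newLine();
    }
} catch (IOException e) {
    System.err.format("IOException: %s%n", e);
}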

Parsing text string using delimiter

I have a text file:
Structure:
key1: xxx
key2:
key3:
key4:
key5:value5
key6: This is an example text file to add data
this is an example text file
this is an example text file
this is an example text file
key7:
I have tried to parse it, but I am finding it difficult to split on the delimiter ':' and add the results to a map so that I can access the values by key. I have tried the code below. The problem is key6, where there is a paragraph and the code tries to split on the delimiter after every new line. Any help with this issue is much appreciated.
try{
Map<Object, Object> map = new Properties();
BufferedReader br = new BufferedReader
(new FileReader(textString));
String line = "";
while ((line = br.readLine()) != null) {
String fields[] = line.split(":");
map.put(fields[0], fields[1]);
}
br.close();
}catch(Exception e){
LOGGER.debug("Exception", e);
}
That happens because you split each line right after reading it. Why not read the whole text first and then perform the split? Like this:
try {
    Map<Object, Object> map = new Properties();
    BufferedReader br = new BufferedReader(new FileReader(textString));
    StringBuffer text = new StringBuffer();
    String line;
    while ((line = br.readLine()) != null) {
        // keep the line we just read; calling readLine() again here would skip every other line
        text.append(line).append('\n');
    }
    br.close();
    String[] fields = text.toString().split(":");
    for (int i = 0; i < fields.length - 1; i++) {
        map.put(fields[i], fields[i + 1]);
    }
} catch (Exception e) {
    LOGGER.debug("Exception", e);
}
NOTE
If any of the values contain a colon, it will break your data, both with this solution and with what you are currently doing. Ideally, if you can, use the Properties class that is part of the Java standard library.
If you want to treat the file as key-value pairs and store them in a map, then do not use ':' as the separator; use '='. For a value that continues on the next line, end the line with a '\' character and the value will be picked up from the next line.
Properties File:
key1= xxx
key2=
key3=
key4=
key5=value5
key6= This is an example text file to add data \
this is an example text file \
this is an example text file \
this is an example text file
key7=
How to fetch value:
public static void main(String[] args) throws IOException
{
Map<String, String> map = new HashMap<String, String>();
Properties properties = new Properties();
properties.load(Main.class.getResourceAsStream("prop.properties"));
for (final Entry<Object, Object> entry : properties.entrySet()) {
map.put( (String)entry.getKey(), (String) entry.getValue());
}
for (Entry<String, String> entry : map.entrySet()) {
System.out.println(entry.getKey());
System.out.println(entry.getValue());
}
}
Output:
key1
xxx
key2
key5
value5
key6
This is an example text file to add data this is an example text file this is an example text file this is an example text file
key3
key4
key7

Java - Write hashmap to a csv file

I have a hashmap with a String key and String value. It contains a large number of keys and their respective values.
For example:
key | value
abc | aabbcc
def | ddeeff
I would like to write this hashmap to a csv file such that my csv file contains rows as below:
abc,aabbcc
def,ddeeff
I tried the following example using the supercsv library: http://javafascination.blogspot.com/2009/07/csv-write-using-java.html. However, in this example, you have to create a hashmap for each row that you want to add to your csv file. I have a large number of key-value pairs, which means that several hashmaps, each containing the data for one row, need to be created. I would like to know if there is a more optimized approach for this use case.
Using the Jackson API, a Map or a List of Maps can be written to a CSV file. See the complete example here.
/**
 * @param listOfMap
 * @param writer
 * @throws IOException
 */
public static void csvWriter(List<HashMap<String, String>> listOfMap, Writer writer) throws IOException {
CsvSchema schema = null;
CsvSchema.Builder schemaBuilder = CsvSchema.builder();
if (listOfMap != null && !listOfMap.isEmpty()) {
for (String col : listOfMap.get(0).keySet()) {
schemaBuilder.addColumn(col);
}
schema = schemaBuilder.build().withLineSeparator(System.lineSeparator()).withHeader();
}
CsvMapper mapper = new CsvMapper();
mapper.writer(schema).writeValues(writer).writeAll(listOfMap);
writer.flush();
}
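A possible way to call it (the file name and row data here are made up for illustration):
List<HashMap<String, String>> rows = new ArrayList<>();
HashMap<String, String> row = new HashMap<>();
row.put("key", "abc");
row.put("value", "aabbcc");
rows.add(row);
try (Writer writer = new FileWriter("output.csv")) {
    csvWriter(rows, writer);
}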
Something like this should do the trick:
String eol = System.getProperty("line.separator");
try (Writer writer = new FileWriter("somefile.csv")) {
for (Map.Entry<String, String> entry : myHashMap.entrySet()) {
writer.append(entry.getKey())
.append(',')
.append(entry.getValue())
.append(eol);
}
} catch (IOException ex) {
ex.printStackTrace(System.err);
}
As your question is asking how to do this using Super CSV, I thought I'd chime in (as a maintainer of the project).
I initially thought you could just iterate over the map's entry set using CsvBeanWriter and a name mapping array of "key", "value", but this doesn't work because HashMap's internal implementation doesn't allow reflection to get the key/value.
So your only option is to use CsvListWriter as follows. At least this way you don't have to worry about escaping CSV (every other example here just joins with commas...aaarrggh!):
@Test
public void writeHashMapToCsv() throws Exception {
Map<String, String> map = new HashMap<>();
map.put("abc", "aabbcc");
map.put("def", "ddeeff");
StringWriter output = new StringWriter();
try (ICsvListWriter listWriter = new CsvListWriter(output,
CsvPreference.STANDARD_PREFERENCE)){
for (Map.Entry<String, String> entry : map.entrySet()){
listWriter.write(entry.getKey(), entry.getValue());
}
}
System.out.println(output);
}
Output:
abc,aabbcc
def,ddeeff
try {
    Map<String, String> csvMap = new TreeMap<>();
    csvMap.put("Hotel Name", hotelDetails.getHotelName());
    csvMap.put("Hotel Classification", hotelDetails.getClassOfHotel());
    csvMap.put("Number of Rooms", hotelDetails.getNumberOfRooms());
    csvMap.put("Hotel Address", hotelDetails.getAddress());
    // file specified by filepath
    File file = new File(fileLocation + hotelDetails.getHotelName() + ".csv");
    // create FileWriter object with file as parameter
    FileWriter outputfile = new FileWriter(file);
    String[] header = csvMap.keySet().toArray(new String[csvMap.size()]);
    String[] dataSet = csvMap.values().toArray(new String[csvMap.size()]);
    // create CSVWriter object with the filewriter object as parameter
    CSVWriter writer = new CSVWriter(outputfile);
    // adding data to csv
    writer.writeNext(header);
    writer.writeNext(dataSet);
    // closing writer connection
    writer.close();
} catch (IOException e) {
    e.printStackTrace();
}
If you have a single hashmap it is just a few lines of code. Something like this:
Map<String,String> myMap = new HashMap<>();
myMap.put("foo", "bar");
myMap.put("baz", "foobar");
StringBuilder builder = new StringBuilder();
for (Map.Entry<String, String> kvp : myMap.entrySet()) {
builder.append(kvp.getKey());
builder.append(",");
builder.append(kvp.getValue());
builder.append("\r\n");
}
String content = builder.toString().trim();
System.out.println(content);
//use your preferred method to write content to a file - for example Apache FileUtils.writeStringToFile(...) instead of syso.
result would be
foo,bar
baz,foobar
My Java is a little limited but couldn't you just loop over the HashMap and add each entry to a string?
// m = your HashMap
StringBuilder builder = new StringBuilder();
for(Entry<String, String> e : m.entrySet())
{
String key = e.getKey();
String value = e.getValue();
builder.append(key);
builder.append(',');
builder.append(value);
builder.append(System.getProperty("line.separator"));
}
String result = builder.toString();

Save and Load nested Map into text file using Java

I have a nested Map. I want to save these maps in a text file and then use this file in other projects. I can save outerMap correctly; however, I need efficient code to load this file back into the same maps (I use '#' to separate the key of outerMap from its innerMap).
Map <String, Map <String,Double>> outerMap= new HashMap<>();
Map <String,Double> innerMap= new HashMap<>();
.
.
.
PrintWriter writer = new PrintWriter("e:\\t.txt", "UTF-8");
Iterator it = outerMap.entrySet().iterator();
while (it.hasNext()) {
Map.Entry pairs = (Map.Entry)it.next();
writer.println(pairs.getKey() + "#" + pairs.getValue());
}
writer.close();
Regarding your question in the comments:
Gson gson = new Gson();
Map<String, Map<String, Double>> outerMap = new HashMap<>();
Map<String, Double> innerMap = new HashMap<>();
innerMap.put("1", 1.0);
innerMap.put("2", 2.0);
outerMap.put("key1", innerMap);
String json = gson.toJson(outerMap);
Path path = FileSystems.getDefault().getPath("", "myfile.txt");
Files.write(path, json.getBytes("UTF-8"), StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING);
json = new String(Files.readAllBytes(path));
outerMap = gson.fromJson(json, new TypeToken<Map<String, Map<String, Double>>>(){}.getType());
for (Map.Entry<String, Map<String, Double>> outerEntry: outerMap.entrySet()) {
System.out.println(outerEntry.getKey());
innerMap = outerEntry.getValue();
for (Map.Entry<String, Double> innerEntry: innerMap.entrySet()) {
System.out.println(" " + innerEntry.getKey() + "->" + innerEntry.getValue());
}
}
Output:
key1
2->2.0
1->1.0
You need to use the Gson library for this.
If you don't need human-readable content in the text file, you can use the approach suggested in the first comment:
Map<String, Map<String, Double>> outerMap = new HashMap<>();
Map<String, Double> innerMap = new HashMap<>();
innerMap.put("1", 1.0);
innerMap.put("2", 2.0);
outerMap.put("key1", innerMap);
// write to file
try (ObjectOutput objectOutputStream = new ObjectOutputStream(new BufferedOutputStream(new FileOutputStream("myfile2.txt", false)))) {
objectOutputStream.writeObject(outerMap);
} catch (Throwable cause) {
cause.printStackTrace();
}
// read from file
try (ObjectInput objectInputStream = new ObjectInputStream(new BufferedInputStream(new FileInputStream("myfile2.txt")))) {
outerMap = (Map<String, Map<String, Double>>) objectInputStream.readObject();
} catch (Throwable cause) {
cause.printStackTrace();
}
Your map will then be available through the 'outerMap' reference.
This sounds pretty straightforward to me.
(Assuming you want to load the values of the file into innerMap)
FileInputStream inputStream = new FileInputStream("e:\\t.txt");
BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
String buffer;
while((buffer = reader.readLine()) != null){
String[] pairs = buffer.split("#");
innerMap.put(pairs[0],Double.parseDouble(pairs[1]));
}
Pepper it with safety checks where you deem necessary.
NOTE : Sample code is written blindly and might be subject to careless mistakes.

How to check number of instances of a domain in a text file

I have a text file containing domains like
ABC.COM
ABC.COM
DEF.COM
DEF.COM
XYZ.COM
I want to read the domains from the text file and count how many instances of each domain there are.
Reading from a text file is easy, but I am confused about how to count the number of instances of each domain.
Please help.
Split by space (String instances have a split method), iterate through the resulting array, and use a Map<String (domainName), Integer (count)>: when the domain is already in the map, increase its count by 1; when it is not, put the domain name in the map with 1 as the value.
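A minimal sketch of that idea (the file name domains.txt is made up; each line is assumed to hold space-separated domains):
Map<String, Integer> counts = new HashMap<>();
for (String line : Files.readAllLines(Paths.get("domains.txt"))) {
    for (String domain : line.split(" ")) {
        Integer current = counts.get(domain);
        counts.put(domain, current == null ? 1 : current + 1);
    }
}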
A better solution is to use a Map that maps each word to its frequency.
Map<String,Integer> frequency = new LinkedHashMap<String,Integer>();
Read file
BufferedReader in = new BufferedReader(new FileReader("infilename"));
String str;
while ((str = in.readLine()) != null) {
buildMap(str);
}
in.close();
Build map method: you can split the URLs in your file by reading them line by line and splitting on the delimiter (in your case, a space).
String [] words = line.split(" ");
for (String word:words){
Integer f = frequency.get(word);
if(f==null) f=0;
frequency.put(word,f+1);
}
Find out for a particular domain with:
frequency.get(domainName)
Ref: Counting frequency of a string
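On Java 8 and later, the null check in the loop above can be collapsed with merge (just an alternative sketch):
frequency.merge(word, 1, Integer::sum);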
List<String> domains=new ArrayList<String>(); // values from your file
domains.add("abc.com");
domains.add("abc.com");
domains.add("xyz.com");
//added for example
Map<String,Integer> domainCount=new HashMap<String, Integer>();
for(String domain:domains){
if(domainCount.containsKey(domain)){
domainCount.put(domain, domainCount.get(domain)+1);
}else
domainCount.put(domain, new Integer(1));
}
Set<Entry<String, Integer>> entrySet = domainCount.entrySet();
for (Entry<String, Integer> entry : entrySet) {
System.out.println(entry.getKey()+" : "+entry.getValue());
}
If the domains are unknown you can do something like:
// Field Declaration
private Map<String, Integer> mappedDomain = new LinkedHashMap<String, Integer>();
private static final List<String> domainList = new ArrayList<String>();
// Add all that you want to track
domainList.add("com");
domainList.add("net");
domainList.add("org");
...
// Inside the loop where you do a readLine
String[] words = line.split(" ");
for (String word : words) {
String[] wordSplit = word.split("\\."); // split() takes a regex, so the literal dot must be escaped
if (wordSplit.length == 2) {
for (String domainCheck : domainList) {
if (domainCheck.equalsIgnoreCase(wordSplit[1])) { // case-insensitive so COM matches com
if (mappedDomain.containsKey(word)) {
mappedDomain.put(word, mappedDomain.get(word)+1);
} else {
mappedDomain.put(word, 1);
}
}
}
}
}
Note: this will work for something like xxx.xxx; if you need to handle more complex formats, you will need to modify the wordSplit logic.
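If the format could be deeper (e.g. mail.abc.com), one hedged way to adapt the logic would be to take everything after the last dot instead of requiring exactly two parts (illustrative only):
int lastDot = word.lastIndexOf('.');
if (lastDot > 0 && lastDot < word.length() - 1) {
    String suffix = word.substring(lastDot + 1);
    if (domainList.contains(suffix.toLowerCase())) {
        mappedDomain.merge(word, 1, Integer::sum);
    }
}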
