Java: check CSV file for duplicate lines using ArrayList

I have a CSV file with this content:
2017-10-29 00:00:00.0,"1005",-10227,0,0,0,332894,0,0,222,332894,222,332894
2017-10-29 00:00:00.0,"1010",-125529,0,0,0,420743,0,0,256,420743,256,420743
2017-10-29 00:00:00.0,"1005",-10227,0,0,0,332894,0,0,222,332894,222,332894
2017-10-29 00:00:00.0,"1013",-10625,0,0,-687,599098,0,0,379,599098,379,599098
2017-10-29 00:00:00.0,"1604",-1794.9,0,0,-3.99,4081.07,0,0,361,4081.07,361,4081.07
So lines 1 and 3 are duplicates.
Now I want to read the file in and print out duplicate lines in the console.
I set up this Java code that reads the file in and adds it line by line to an ArrayList. Then I create an immutable copy, loop through the ArrayList, and pass the immutable copy to binarySearch:
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class ReadValidationFile {
    public static void main(String[] args) {
        List<String> validationFile = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(new FileReader("validation_small.csv"));) {
            String line;
            while ((line = br.readLine()) != null) {
                validationFile.add(line);
            }
        } catch (FileNotFoundException e) {
            //e.printStackTrace();
            System.out.println("file not found " + e.getMessage());
        } catch (IOException e) {
            e.printStackTrace();
        }
        List<String> validationFileCopy = Collections.unmodifiableList(validationFile);
        for (String line : validationFile) {
            int comp = Collections.binarySearch(validationFileCopy, line, new ComparatorLine());
            if (comp <= 0) {
                System.out.println(line);
            }
        }
    }
}
Comparator Class:
import java.util.Comparator;
public class ComparatorLine implements Comparator<String> {
    @Override
    public int compare(String s1, String s2) {
        return s1.compareToIgnoreCase(s2);
    }
}
I expect this line to be printed:
2017-10-29 00:00:00.0,"1005",-10227,0,0,0,332894,0,0,222,332894,222,332894
But the output I get is this:
2017-10-29 00:00:00.0,"1010",-125529,0,0,0,420743,0,0,256,420743,256,420743
Can you please help me see what I am doing wrong? I think my comparator is okay. What is wrong with my ArrayLists?

The other answer(s) correctly state that you should be using Set instead of List. But for the sake of learning, let's have a look at your code and see where you went wrong.
public class ReadValidationFile {
public static void main(String[] args) {
List<String> validationFile = new ArrayList<>();
try(BufferedReader br = new BufferedReader(new FileReader("validation_small.csv"));){
Semicolon is unnecessary.
String line;
while((line = br.readLine())!= null){
validationFile.add(line);
}
This can all be achieved in just one line: List<String> validationFile = Files.readAllLines(Paths.get("validation_small.csv"), StandardCharsets.UTF_8); (note that readAllLines takes a Charset, not the string "utf-8").
} catch (FileNotFoundException e) {
//e.printStackTrace();
System.out.println("file not found " + e.getMessage());
} catch (IOException e) {
e.printStackTrace();
}
List<String> validationFileCopy = Collections.unmodifiableList(validationFile);
Actually, this is not a copy. It is just an unmodifiable view of the same list.
for(String line : validationFile){
int comp = Collections.binarySearch(validationFileCopy,line,new ComparatorLine());
You might as well just search validationFile itself. However, you are calling binarySearch which only works on sorted lists, but your list is not sorted. See documentation.
if (comp <= 0){
System.out.println(line);
}
You are printing when the search fails: binarySearch returns a negative number when the element is not found and the non-negative index when it is found, so your comp <= 0 check even treats a match at index 0 as a failure. Another problem is that you are searching the whole list for each element, so (if your list were sorted) the search would always succeed anyway.
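For the record, if you wanted a sorting-based fix instead of a Set, you would sort a copy and then compare adjacent lines rather than binary-searching each one; a minimal sketch reusing your ComparatorLine:
ComparatorLine comparator = new ComparatorLine();
List<String> sorted = new ArrayList<>(validationFile);
sorted.sort(comparator);
for (int i = 1; i < sorted.size(); i++) {
    // after sorting, duplicate lines sit next to each other
    if (comparator.compare(sorted.get(i - 1), sorted.get(i)) == 0) {
        System.out.println(sorted.get(i));
    }
}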
Save yourself all the trouble and use a Set instead. And, using Java 8 streams, the whole program can be reduced to the following:
public static void main(String[] args) throws Exception {
    Set<String> uniqueLines = new HashSet<>();
    Files.lines(Paths.get("validation_small.csv"), StandardCharsets.UTF_8)
         .filter(line -> !uniqueLines.add(line))
         .forEach(System.out::println);
}
If you really need to ignore case when comparing strings (from your given data, it looks like it doesn't make any difference since it's just numbers), then store each unique line by first uppercasing and then lowercasing it. This apparently cumbersome technique is necessary because just lowercasing is not enough if dealing with non-English language text. The equalsIgnoreCase method also does this.
public static void main(String[] args) throws Exception {
    Set<String> uniqueLines = new HashSet<>();
    Files.lines(Paths.get("validation_small.csv"), StandardCharsets.UTF_8)
         .filter(line -> !uniqueLines.add(line.toUpperCase().toLowerCase()))
         .forEach(System.out::println);
}

Create a Set while reading lines from the input CSV file; any time add() returns false, print that line, as it is a duplicate.
If you want a list of all duplicate lines, collect into a List the lines for which add() to the Set returned false.
NOTE:
I have simulated your file reading by using static data.
A small note: if your data contains only numbers and no letters, you do not need case-insensitive comparison.
If your data does contain letters, you still do not need a special Comparator: insert into the Set using add(line.toLowerCase()), which ensures all lines are lower-cased before they are compared and added.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
public class ReadValidationFile {
    static List<String> validationFile = new ArrayList<>();
    static {
        validationFile.add("2017-10-29 00:00:00.0,\"1005\",-10227,0,0,0,332894,0,0,222,332894,222,332894");
        validationFile.add("2017-10-29 00:00:00.0,\"1010\",-125529,0,0,0,420743,0,0,256,420743,256,420743");
        validationFile.add("2017-10-29 00:00:00.0,\"1005\",-10227,0,0,0,332894,0,0,222,332894,222,332894");
        validationFile.add("2017-10-29 00:00:00.0,\"1013\",-10625,0,0,-687,599098,0,0,379,599098,379,599098");
        validationFile.add("2017-10-29 00:00:00.0,\"1604\",-1794.9,0,0,-3.99,4081.07,0,0,361,4081.07,361,4081.07");
    }
    public static void main(String[] args) {
        // Option 1 : unique lines only
        Set<String> uniqueLinesOnly = new HashSet<>(validationFile);
        // Option 2 : unique lines and duplicate lines
        Set<String> uniqueLines = new HashSet<>();
        Set<String> duplicateLines = new HashSet<>();
        for (String line : validationFile) {
            if (!uniqueLines.add(line.toLowerCase())) {
                duplicateLines.add(line.toLowerCase());
            }
        }
        // Option 3 : unique lines and duplicate lines by Java Streams
        Set<String> uniquesJava8 = new HashSet<>();
        List<String> duplicatesJava8 = validationFile
                .stream()
                .filter(element -> !uniquesJava8.add(element.toLowerCase()))
                .map(element -> element.toLowerCase())
                .collect(Collectors.toList());
    }
}

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
public class ReadValidationFile {
    public static void main(String[] args) {
        List<String> validationFile = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(new FileReader("validation_small.csv"))) {
            String line;
            while ((line = br.readLine()) != null) {
                validationFile.add(line);
            }
        } catch (FileNotFoundException e) {
            //e.printStackTrace();
            System.out.println("file not found " + e.getMessage());
        } catch (IOException e) {
            e.printStackTrace();
        }
        Set<String> uniques = new HashSet<>();
        List<String> duplicates = validationFile.stream().filter(i -> !uniques.add(i)).collect(Collectors.toList());
        System.out.println(duplicates);
    }
}

Related

Duplicate word frequencies issues in Java [duplicate]

This question already has an answer here:
Duplicate word frequencies problem in text file in Java [closed]
(1 answer)
Closed 1 year ago.
[I am new to Java and Stack Overflow. My last question was closed; I have added complete code this time. Thanks.] I have a large 4 GB txt file (vocab.txt). It contains plain Bangla (Unicode) words. Each word is on its own line with its frequency, separated by an equals sign. For example:
আমার=5
তুমি=3
সে=4
আমার=3 //duplicate of the 1st word, with a different frequency
করিম=8
সে=7 //duplicate of the 3rd word, with a different frequency
As you can see, it has the same words multiple times with different frequencies. How can I keep only a single entry per word, with the sum of the frequencies of its duplicates? The file above would then become (output.txt):
আমার=8 //5+3
তুমি=3
সে=11 //4+7
করিম=8
I have used a HashMap to solve the problem, but I think I made a mistake somewhere: it runs, but it writes the input data to the output file unchanged.
package data_correction;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.OutputStreamWriter;
import java.util.*;
import java.awt.Toolkit;
public class Main {
    public static void main(String args[]) throws Exception {
        FileInputStream inputStream = null;
        Scanner sc = null;
        String path = "C:\\DATA\\vocab.txt";
        FileOutputStream fos = new FileOutputStream("C:\\DATA\\output.txt", true);
        BufferedWriter bufferedWriter = new BufferedWriter(
                new OutputStreamWriter(fos, "UTF-8"));
        try {
            System.out.println("Started!!");
            inputStream = new FileInputStream(path);
            sc = new Scanner(inputStream, "UTF-8");
            while (sc.hasNextLine()) {
                String line = sc.nextLine();
                line = line.trim();
                String[] arr = line.split("=");
                Map<String, Integer> map = new HashMap<>();
                if (!map.containsKey(arr[0])) {
                    map.put(arr[0], Integer.parseInt(arr[1]));
                } else {
                    map.put(arr[0], map.get(arr[0]) + Integer.parseInt(arr[1]));
                }
                for (Map.Entry<String, Integer> each : map.entrySet()) {
                    bufferedWriter.write(each.getKey() + "=" + each.getValue() + "\n");
                }
            }
            bufferedWriter.close();
            if (sc.ioException() != null) {
                throw sc.ioException();
            }
        } finally {
            if (inputStream != null) {
                inputStream.close();
            }
            if (sc != null) {
                sc.close();
            }
        }
        System.out.print("FINISH");
        Toolkit.getDefaultToolkit().beep();
    }
}
Thanks for your time.
The problem in your code is that you create a brand-new HashMap inside the while loop and write its single entry on every iteration, so the output simply mirrors the input. Build one map for the whole file and write it out once at the end. This should do what you want, with some more Java magic:
public static void main(String[] args) throws Exception {
    String separator = "=";
    Map<String, Integer> map = new HashMap<>();
    try (Stream<String> vocabs = Files.lines(new File("test.txt").toPath(), StandardCharsets.UTF_8)) {
        vocabs.forEach(
                vocab -> {
                    String[] pair = vocab.split(separator);
                    int value = Integer.valueOf(pair[1]);
                    String key = pair[0];
                    if (map.containsKey(key)) {
                        map.put(key, map.get(key) + value);
                    } else {
                        map.put(key, value);
                    }
                }
        );
    }
    System.out.println(map);
}
For test.txt use the correct file path. Note that the map is kept entirely in memory, so this may not be the best approach for a 4 GB file; if necessary, replace the map with, e.g., a database-backed approach.
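Since the original goal was to write the merged counts back to output.txt in UTF-8, here is a minimal follow-up sketch, assuming the map built above:
try (BufferedWriter out = Files.newBufferedWriter(Paths.get("C:\\DATA\\output.txt"), StandardCharsets.UTF_8)) {
    for (Map.Entry<String, Integer> entry : map.entrySet()) {
        // one "word=totalFrequency" pair per line
        out.write(entry.getKey() + "=" + entry.getValue());
        out.newLine();
    }
}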

Convert ArrayList to TreeMap

I have a small comprehension problem; I will put the code here and try to explain it.
I have a first class, ReadSymptomDataFromFile:
package com.hemebiotech.analytics;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
/**
* Simple brute force implementation
*
*/
public class ReadSymptomDataFromFile implements ISymptomReader {
    private final String filepath;
    /**
     * @param filepath a full or partial path to file with symptom strings in it, one per line
     */
    public ReadSymptomDataFromFile(String filepath) {
        this.filepath = filepath;
    }
    @Override
    public List<String> getSymptoms() {
        ArrayList<String> result = new ArrayList<>();
        if (filepath != null) {
            try {
                BufferedReader reader = new BufferedReader(new FileReader(filepath));
                String line = reader.readLine();
                while (line != null) {
                    result.add(line);
                    line = reader.readLine();
                }
                reader.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return result;
    }
}
This class is used to read a txt file containing a list of symptoms, with the same symptoms appearing several times, hence the interest of a TreeMap: each symptom associated with the number of times it appears (the symptom as key, the count as value).
So far so good.
Then I have this code that I made myself, but it does without the ReadSymptomDataFromFile class:
package com.hemebiotech.analytics.Test;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.*;
public class MainAppTest2 {
    public static void main(String[] args) {
        try {
            File file = new File("Project02Eclipse\\symptoms.txt");
            Scanner scan = new Scanner(file);
            Map<String, Integer> wordCount = new TreeMap<>();
            while (scan.hasNext()) {
                String word = scan.next();
                if (!wordCount.containsKey(word)) {
                    wordCount.put(word, 1);
                } else {
                    wordCount.put(word, wordCount.get(word) + 1);
                }
            }
            // Result in console & write file output
            FileWriter writer = new FileWriter("resultat2.out");
            BufferedWriter out = new BufferedWriter(writer);
            for (Map.Entry<String, Integer> entry : wordCount.entrySet()) {
                System.out.println("Valeur: " + entry.getKey() + "| Occurence: " + entry.getValue());
                out.write(entry.getKey() + " = " + entry.getValue() + " \n");
                out.flush(); // force write
            }
        } catch (IOException e) {
            System.out.println("Fichier introuvable");
        }
    }
}
This code does much the same thing: it reads a txt file, stores the counts in a TreeMap, displays them in the console, and saves them to a result file.
Now my problem is that I am trying to split my code into several classes while reusing the already existing ReadSymptomDataFromFile class: one class to read the text file, another to convert it all to a TreeMap, another to write the results to an output file, and a final one for exception handling.
I started with this FileToTreeMap class, but it is ugly and not clean, and I am sure there is a better way to convert the output of my ReadSymptomDataFromFile object into a TreeMap:
package com.hemebiotech.analytics.Test.read;
import com.hemebiotech.analytics.ReadSymptomDataFromFile;
import java.util.*;
public class FileToTreeMap {
    // Read file
    public Map<String, Integer> readFile() {
        ReadSymptomDataFromFile list = new ReadSymptomDataFromFile("Project02Eclipse\\symptoms.txt");
        Map<String, Integer> listSort = new TreeMap<>();
        ArrayList<String> test = new ArrayList<>(list.getSymptoms());
        Scanner scan = new Scanner(String.valueOf(test));
        while (scan.hasNext()) {
            String word = scan.next();
            if (!listSort.containsKey(word)) {
                listSort.put(word, 1);
            } else {
                listSort.put(word, listSort.get(word) + 1);
            }
        }
        for (Map.Entry<String, Integer> entry : listSort.entrySet()) {
            System.out.println("Valeur: " + entry.getKey() + " Occurence: " + entry.getValue());
        }
        return listSort;
    }
}
Here, I am a little lost in splitting up my code, and the main problem I have is converting my ArrayList to a TreeMap.
Sorry for the length of the post, but I would appreciate any help I get, thanks in advance.
To convert a list into a map, what you can do is use a for loop, which goes through the list and adds each item one by one:
ReadSymptomDataFromFile list = new ReadSymptomDataFromFile("Project02Eclipse\\symptoms.txt");
Map<String, Integer> listSort = new TreeMap<>();
List<String> test = list.getSymptoms();
for (String word : test) {
    if (!listSort.containsKey(word)) {
        listSort.put(word, 1);
    } else {
        listSort.put(word, listSort.get(word) + 1);
    }
}
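If the goal is also to keep the counting logic in its own class, as the question describes, the same loop can be packaged into a small converter class. A sketch with a hypothetical class name (SymptomCounter is not part of the original project):
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical helper class: turns the symptom list into a sorted count map.
public class SymptomCounter {
    public Map<String, Integer> count(List<String> symptoms) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String symptom : symptoms) {
            counts.merge(symptom, 1, Integer::sum);
        }
        return counts;
    }
}
Usage would then be: new SymptomCounter().count(new ReadSymptomDataFromFile("Project02Eclipse\\symptoms.txt").getSymptoms());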
Try in Java 8
final List<String> symptomList = new ReadSymptomDataFromFile("Project02Eclipse\\symptoms.txt").getSymptoms();
final Map<String, Integer> countedSymptoms = new TreeMap<>();
symptomList.forEach(symptom ->
        countedSymptoms.put(symptom, Collections.frequency(symptomList, symptom)));
You iterate through the list and put the elements in the map. Since this is an ArrayList, duplicate elements stay in it. But I could not see why you have to read some data into a list and some into a TreeMap in the first place; you have already been loading your symptoms data into a TreeMap. So the code stub is more like:
for (String el : test) {
    if (!listSort.containsKey(el)) {
        listSort.put(el, 1);
    } else {
        listSort.put(el, listSort.get(el) + 1);
    }
}
You could use the merge method of Map:
List<String> symptoms = List.of("a", "b", "c", "b", "c", "a", "b", "b", "c");
Map<String, Integer> counts = new TreeMap<>();
symptoms.forEach(s -> counts.merge(s, 1, Integer::sum));
counts.forEach((k, v) -> System.out.println(k + ": " + v));
This prints:
a: 2
b: 4
c: 3

Java StringTokenizer assistance

So, I have a separate program that requires a string of numbers in the format final static private String INITIAL = "281043765"; (no spaces). This program works perfectly so far with the hard-coded assignment of the numbers. Instead of typing the numbers in the code, I need the program to read a txt file and assign the numbers in the file to the INITIAL variable.
To do this, I'm attempting to use StringTokenizer. My implementation outputs [7, 2, 4, 5, 0, 6, 8, 1, 3]. I need it to output the numbers without the "[]" or the "," or any spaces between the numbers, making it look exactly like INITIAL. I'm aware I could probably add [] and , as delimiters, but the original txt file is purely numbers, so I don't believe that will help. Here is my code (please ignore all the comments):
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.StringTokenizer;
public class test {
    //public static String num;
    static ArrayList<String> num;
    //public static TextFileInput myFile;
    public static StringTokenizer myTokens;
    //public static String name;
    //public static String[] names;
    public static String line;
    public static void main(String[] args) {
        BufferedReader reader = null;
        try {
            File file = new File("test3_14.txt");
            reader = new BufferedReader(new FileReader(file));
            num = new ArrayList<String>();
            while ((line = reader.readLine()) != null) {
                myTokens = new StringTokenizer(line, " ,");
                //num = new String[myTokens.countTokens()];
                while (myTokens.hasMoreTokens()) {
                    num.add(myTokens.nextToken());
                }
            }
            System.out.println(num);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                reader.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
You are currently printing the default .toString() implementation of ArrayList. Try this instead:
for (String nbr : num) {
    System.out.print(nbr);
}
To get rid of the brackets, you have to print each item of the ArrayList individually. Try this just below your System.out.println:
for(String number : num)
System.out.print(number);
System.out.println("");
Can you also provide sample input data?
Try replacing the space and , with an empty string (StringUtils here is from Apache Commons Lang):
line = StringUtils.replaceAll(line, ",", "");
line = StringUtils.replaceAll(line, " ", "");
System.out.println(line);
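Another option, since num already holds the digit tokens in order, is to join them with no separator using String.join; a minimal sketch:
// e.g. ["2", "8", "1", "0", "4", "3", "7", "6", "5"] -> "281043765"
String joined = String.join("", num);
System.out.println(joined);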

Reading a csv file into a HashMap<String, ArrayList<Integer>>

I've been trying to make a Java program in which a tab-delimited CSV file is read line by line, the first column (a string) is added as a key to a HashMap, and the second column (an integer) is its value.
In the input file there are duplicate keys with different values, so I was going to add each value to the existing key to build up an ArrayList of values.
I can't figure out the best way of doing this and was wondering if anyone could help?
Thanks
EDIT: sorry guys, here's where I've got to with the code so far:
I should add that the first column is the value and the second column is the key.
public class WordNet {
    private final HashMap<String, ArrayList<Integer>> words;
    private final static String LEXICAL_UNITS_FILE = "wordnet_data/wn_s.csv";
    public WordNet() throws FileNotFoundException, IOException {
        words = new HashMap<>();
        readLexicalUnitsFile();
    }
    private void readLexicalUnitsFile() throws FileNotFoundException, IOException {
        BufferedReader in = new BufferedReader(new FileReader(LEXICAL_UNITS_FILE));
        String line;
        while ((line = in.readLine()) != null) {
            String columns[] = line.split("\t");
            if (!words.containsKey(columns[1])) {
                words.put(columns[1], new ArrayList<>());
            }
        }
        in.close();
    }
}
You are close
String columns[] = line.split("\t");
if (!words.containsKey(columns[1])) {
words.put(columns[1], new ArrayList<>());
}
should be
String columns[] = line.split("\t");
String key = columns[0]; // enhance readability of code below
List<Integer> list = words.get(key); // try to fetch the list
if (list == null) // check if the key is defined
{ // if not
list = new ArrayList<>(); // create a new list
words.put(key,list); // and add it to the map
}
list.add(new Integer(columns[1])); // in either case, add the value to the list
In response to the OP's comment/question
... the final line just adds the integer to the list but not to the hashmap, does something need to be added after that?
After the statement
List<Integer> list = words.get(key);
there are two possibilities. If list is non-null, then it is a reference to (not a copy of) the list that is already in the map.
If list is null, then we know the map does not contain the given key. In that case we create a new empty list, set the variable list as a reference to the newly created list, and then add the list to the map for the key.
In either case, when we reach
list.add(new Integer(columns[1]));
the variable list contains a reference to an ArrayList that is already in the map, either the one that was there before, or one we just created and added. We just add the value to it.
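A quick standalone sketch to convince yourself of these reference semantics (the key "word" and value 42 are just for illustration):
Map<String, List<Integer>> map = new HashMap<>();
map.put("word", new ArrayList<>());
List<Integer> list = map.get("word"); // a reference to the list inside the map, not a copy
list.add(42);
System.out.println(map.get("word")); // prints [42]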
I should add the first column is the value and the second column is the key.
You could replace the ArrayList declaration with a List declaration, but that is not the main problem here.
Anyway, not tested, but the logic should be something like this:
while ((line = in.readLine()) != null) {
    String columns[] = line.split("\t");
    ArrayList<Integer> valueForCurrentLine = words.get(columns[1]);
    // you instantiate and put the arrayList once
    if (valueForCurrentLine == null) {
        valueForCurrentLine = new ArrayList<Integer>();
        words.put(columns[1], valueForCurrentLine);
    }
    valueForCurrentLine.add(Integer.parseInt(columns[0])); // the first column holds the integer value
}
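A more compact variant of the same logic using Map.computeIfAbsent; a sketch that keeps the question's convention of the value in the first column and the key in the second:
String[] columns = line.split("\t");
// creates and stores the list on first sight of the key, then appends the value
words.computeIfAbsent(columns[1], k -> new ArrayList<>())
     .add(Integer.parseInt(columns[0]));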
Upvote to Jim Garrison's answer above. Here's a little more... (Yes, you should check/mark his answer as the one that solved it)
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class WordNet {
private final Map<String, List<Integer>> words;
private final static String LEXICAL_UNITS_FILE = "src/net/bwillard/practice/code/wn_s.csv";
/**
*
* @throws FileNotFoundException
* @throws IOException
*/
public WordNet() throws FileNotFoundException, IOException {
words = new HashMap<>();
readLexicalUnitsFile();
}
/**
*
* @throws FileNotFoundException
* @throws IOException
*/
private void readLexicalUnitsFile() throws FileNotFoundException, IOException {
BufferedReader in = new BufferedReader(new FileReader(LEXICAL_UNITS_FILE));
String line;
while ((line = in.readLine()) != null) {
String columns[] = line.split("\t");
String key = columns[0];
int valueInt;
List<Integer> valueList;
try {
valueInt = Integer.parseInt(columns[1]);
} catch (NumberFormatException e) {
System.out.println(e);
continue;
}
if (words.containsKey(key)) {
valueList = words.get(key);
} else {
valueList = new ArrayList<>();
words.put(key, valueList);
}
valueList.add(valueInt);
}
in.close();
}
//You can test this file by running it as a standalone app....
public static void main(String[] args) {
try {
WordNet wn = new WordNet();
for (String k : wn.words.keySet()) {
System.out.println(k + " " + wn.words.get(k));
}
} catch (IOException e) {
e.printStackTrace();
}
}
}

How do you remove the first and the last characters from a Multimap's String representation?

I am trying to output the result of a Multimap.get() to a file. However, the [ and ] characters appear as the first and last characters respectively.
I tried to use this program, but it doesn't print any separators between the integers. How can I solve this problem?
package main;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Scanner;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Maps;
import com.google.common.collect.Multimap;
public class App {
    public static void main(String[] args) {
        File file = new File("test.txt");
        ArrayList<String> list = new ArrayList<String>();
        Multimap<Integer, String> newSortedMap = ArrayListMultimap.create();
        try {
            Scanner s = new Scanner(file);
            while (s.hasNext()) {
                list.add(s.next());
            }
            s.close();
        } catch (FileNotFoundException e) {
            System.out.println("File cannot be found in root folder");
        }
        for (String word : list) {
            int key = findKey.convertKey(word);
            newSortedMap.put(key, word);
        }
        // Overwrites old output.txt
        try {
            PrintWriter writer = new PrintWriter("output.txt", "UTF-8");
            for (Integer key : newSortedMap.keySet()) {
                writer.println(newSortedMap.get(key));
            }
            writer.close();
        } catch (FileNotFoundException e) {
            System.out.println("FileNotFoundException e should not occur");
        } catch (UnsupportedEncodingException e) {
            System.out.println("UnsupportedEncodingException has occured");
        }
    }
}
You may assign newSortedMap.get(key).toString() to a variable, let's say stringList. Now call writer.println(stringList.substring(1, stringList.length() - 1));
Understand that when you pass a list into the writer.println method, it will invoke the toString() method of the object and write the output. The list.toString() method returns a string with all values separated by , and adds [ and ] to the beginning and end of the string.
Just modify the String itself using substring.
substring(int, int)
Returns a new string that is a substring of this string. The substring begins at the specified beginIndex and extends to the character at index endIndex - 1. Thus the length of the substring is endIndex-beginIndex.
So, convert the Map into a String. Then index 1 is the second character of the String representation, and mapString.length() - 1 stops just before the closing bracket.
Here's some working code:
PrintWriter writer = new PrintWriter("output.txt", "UTF-8");
String mapString = newSortedMap.toString();
writer.println(mapString.substring(1, mapString.length() - 1));
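Alternatively, you can avoid List.toString() entirely and join the values for each key yourself; a minimal sketch using String.join, which needs no substring trick:
for (Integer key : newSortedMap.keySet()) {
    // join the values with a comma and no surrounding brackets
    writer.println(String.join(",", newSortedMap.get(key)));
}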
