Order a CSV file by a values column in Java

I have a CSV file with two columns: the first of type string and the second of type double.
Starting from this file, I would like to produce another one, sorted in ascending order by the value of the second column. In SQL, the ORDER BY clause sorts a table by a specified column; I would like to get the same result here.
Example input CSV file:
tricolor;14.0
career;9.0
salty;1020.0
looks;208.0
bought;110.0
Expected output CSV file:
career;9.0
tricolor;14.0
bought;110.0
looks;208.0
salty;1020.0

Read the CSV file into a List of Object[] (one Object[] per line in your CSV file):
The first element of the array is the line itself (a String).
The second element of the array is the value of the double (a Double).
So you have the following list:
{
["tricolor;14.0", 14.0],
["career;9.0", 9.0],
["salty;1020.0", 1020.0],
["looks;208.0", 208.0],
["bought;110.0", 110.0]
}
Then sort it based on the value of the double
And you can then write it back to a CSV file (only writing the first element of each array)
List<Object[]> list = readFile("myFile.csv");
list.sort(Comparator.comparing(p -> (Double)p[1]));
// write to csv file, just printing it out here
list.forEach(p -> System.out.println(p[0]));
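To actually write the sorted lines back out instead of printing them, a minimal sketch (the output file name sorted.csv is an assumption):
// hypothetical write-back step: the first array element is the original line
try (BufferedWriter bw = new BufferedWriter(new FileWriter("sorted.csv"))) {
    for (Object[] p : list) {
        bw.write((String) p[0]);
        bw.newLine();
    }
} catch (IOException e) {
    e.printStackTrace();
}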
The method to read the file:
private static List<Object[]> readFile(String fileName) {
    List<Object[]> list = new ArrayList<>();
    try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
        String line;
        String[] splitLine;
        while ((line = br.readLine()) != null) {
            splitLine = line.split(";");
            // add an array: first element is the line itself, second element is the double value
            list.add(new Object[] {line, Double.valueOf(splitLine[1])});
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return list;
}
EDIT: If you want reverse order:
Once you have your sorted list, you can reverse it using the convenient reverse method of the Collections class:
Collections.reverse(list);
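Alternatively, you can sort in descending order directly with a reversed comparator, which is equivalent to sorting ascending and then reversing:
list.sort(Comparator.comparing((Object[] p) -> (Double) p[1]).reversed());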

We can try the general approach of parsing the file into a sorted map (e.g. a TreeMap), then iterating over the map and writing back out to a file. Note that a TreeMap sorts by key, so this orders the output alphabetically by the string column rather than by the double value.
TreeMap<String, Double> map = new TreeMap<String, Double>();
try (BufferedReader br = Files.newBufferedReader(Paths.get("yourfile.csv"))) {
    String line;
    while ((line = br.readLine()) != null) {
        String[] parts = line.split(";");
        map.put(parts[0], Double.parseDouble(parts[1]));
    }
} catch (IOException e) {
    System.err.format("IOException: %s%n", e);
}

// now write the map to file, sorted ascending in alphabetical order of the keys
try (FileWriter writer = new FileWriter("yourfileout.csv");
     BufferedWriter bw = new BufferedWriter(writer)) {
    for (Map.Entry<String, Double> entry : map.entrySet()) {
        bw.write(entry.getKey() + ";" + entry.getValue());
        bw.newLine(); // without this, all entries end up on a single line
    }
} catch (IOException e) {
    System.err.format("IOException: %s%n", e);
}
Notes:
I assume that the string values in the first column would always be unique. If there could be duplicates, the above script would have to be modified to use a map of lists, or something along those lines.
I also assume that the string values would all be lowercase. If not, then you might not get the sorting you expect. One solution, should this be a problem, would be to lowercase (or uppercase) every string before inserting that key into the map.
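For example, a TreeMap built with a case-insensitive comparator would sort "Die" and "die" next to each other (a small sketch; note that keys differing only in case would then collide):
TreeMap<String, Double> map = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);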

Get the semicolon-separated values into a LinkedHashMap:
Map<String, Double> map = new LinkedHashMap<String, Double>();
try (BufferedReader br = Files.newBufferedReader(Paths.get("yourfile.csv"))) {
    String line;
    while ((line = br.readLine()) != null) {
        String[] parts = line.split(";");
        map.put(parts[0], Double.parseDouble(parts[1]));
    }
} catch (IOException e) {
    System.err.format("IOException: %s%n", e);
}
Then sort the map based on the double values. With Java 8 you can try:
LinkedHashMap<String, Double> sortedMap = map.entrySet().stream()
        .sorted(Map.Entry.comparingByValue())
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                (e1, e2) -> e1, LinkedHashMap::new));
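You can then write the sorted entries back out in the same semicolon-separated format, for example:
try (BufferedWriter bw = Files.newBufferedWriter(Paths.get("yourfileout.csv"))) {
    for (Map.Entry<String, Double> entry : sortedMap.entrySet()) {
        bw.write(entry.getKey() + ";" + entry.getValue());
        bw.newLine();
    }
} catch (IOException e) {
    System.err.format("IOException: %s%n", e);
}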

Related

Create 2 HashMaps within a loop to compare the values

I am reading a txt file of pipe-delimited records; each token of a record corresponds to a unique key and a value. I need to compare the values of consecutive records (with the same keys). To do this, I want to use a HashMap to store the first record, then iterate and store the second record, and afterwards compare both HashMaps to see whether they contain the same values. But I am not sure how to manage or create two HashMaps within the same loop while reading the txt file.
Example :
txt file as below
A|B|C|D|E
A|B|C|O|E
O|O|C|D|E
Each token of each record will be stored against a unique key, as below.
First record:
map.put(1, A);
map.put(2, B);
map.put(3, C);
map.put(4, D);
map.put(5, E);
Second record:
map.put(1, A);
map.put(2, B);
map.put(3, C);
map.put(4, O);
map.put(5, E);
Third record:
map.put(1, O);
map.put(2, O);
map.put(3, C);
map.put(4, D);
map.put(5, E);
When I read each record in Java using an input stream, how can I create, within the same read loop, two HashMaps for two different records and compare them?
FileInputStream fstream;
fstream = new FileInputStream("C://PROJECTS//sampleInput.txt");
BufferedReader br = new BufferedReader(new InputStreamReader(fstream));
// Read file line by line
String line1 = null;
while ((line1 = br.readLine()) != null) {
    Scanner scan = new Scanner(line1).useDelimiter("\\|");
    while (scan.hasNext()) {
        // reading each token of the record
        // how to create 2 hashmaps of 2 records to compare, or is there any simpler way to compare each incoming record?
    }
    printIt(counter);
}
Maybe you're thinking a little too complicated here by creating a new HashMap for each read line. I'd rather suggest creating an ArrayList<String> (or some other one-dimensional data structure) and storing all read lines sequentially. After that you can iterate over this structure and compare each value against the others using String methods, as in the sketch below.
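A minimal sketch of that idea, reusing the sampleInput.txt path from the question (read all lines first, then compare each line with its successor):
List<String> lines = new ArrayList<>();
try (BufferedReader br = new BufferedReader(new FileReader("C://PROJECTS//sampleInput.txt"))) {
    String line;
    while ((line = br.readLine()) != null) {
        lines.add(line);
    }
} catch (IOException e) {
    e.printStackTrace();
}
// compare each record with the next one
for (int i = 0; i + 1 < lines.size(); i++) {
    boolean same = lines.get(i).equals(lines.get(i + 1));
    System.out.println("records " + (i + 1) + " and " + (i + 2) + ": " + (same ? "equal" : "different"));
}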
If you just want to count the number of diffs between consecutive lines, you can use the following example:
int diffCount = 0;
String fileName = "test.psv";
try (CSVReader reader = new CSVReader(new FileReader(fileName), '|')) {
    String[] prevLine = reader.readNext();
    String[] nextLine;
    while (prevLine != null && (nextLine = reader.readNext()) != null) {
        if (!Arrays.equals(prevLine, nextLine)) {
            diffCount++;
        }
        prevLine = nextLine;
    }
    logger.info("Diff count: {}", diffCount);
} catch (FileNotFoundException e) {
    logger.error("unable to find file {}", fileName, e);
} catch (IOException e) {
    logger.error("Got IO exception", e);
}
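Note: CSVReader here is not part of the JDK; it presumably comes from the opencsv library, and logger from a logging facade such as SLF4J, so both have to be on the classpath for these snippets to compile.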
If for some reason you really want to create two HashMaps, you can try it like this:
int diffCount = 0;
String fileName = "test.psv";
try (CSVReader reader = new CSVReader(new FileReader(fileName), '|')) {
    Map<Integer, String> prevLine = readNextLine(reader);
    Map<Integer, String> nextLine;
    while (prevLine != null && (nextLine = readNextLine(reader)) != null) {
        if (!prevLine.equals(nextLine)) {
            diffCount++;
        }
        prevLine = nextLine;
    }
    logger.info("Diff count: {}", diffCount);
} catch (FileNotFoundException e) {
    logger.error("unable to find file {}", fileName, e);
} catch (IOException e) {
    logger.error("Got IO exception", e);
}

private Map<Integer, String> readNextLine(CSVReader reader) throws IOException {
    String[] nextLine = reader.readNext();
    return nextLine != null ? convert2map(nextLine) : null;
}

// requires: import static java.util.function.Function.identity;
//           import static java.util.stream.Collectors.toMap;
private Map<Integer, String> convert2map(String[] nextLine) {
    return IntStream.range(0, nextLine.length)
            .boxed()
            .collect(toMap(identity(), index -> nextLine[index]));
}

Java how to remove duplicates from ArrayList

I have a CSV file which contains rules and ruleversions. The CSV file looks like this:
CSV FILE:
#RULENAME, RULEVERSION
RULE,01-02-01
RULE,01-02-02
RULE,01-02-34
OTHER_RULE,01-02-04
THIRDRULE, 01-02-04
THIRDRULE, 01-02-04
As you can see, one rule can have one or more rule versions. What I need to do is read this CSV file and put the entries in an array. I am currently doing that with the following code:
private static List<String[]> getRulesFromFile() {
    String csvFile = "rulesets.csv";
    BufferedReader br = null;
    String line = "";
    String delimiter = ",";
    List<String[]> input = new ArrayList<String[]>();
    try {
        br = new BufferedReader(new FileReader(csvFile));
        while ((line = br.readLine()) != null) {
            if (!line.startsWith("#")) {
                String[] rulesetEntry = line.split(delimiter);
                input.add(rulesetEntry);
            }
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (br != null) {
            try {
                br.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return input;
}
But I need to adapt the code so that it saves the information in the following format:
ARRAY (
    => RULE => 01-02-01, 01-02-02, 01-02-34
    => OTHER_RULE => 01-02-04
    => THIRDRULE => 01-02-04
)
What is the best way to do this? Multidimensional array? And how do I make sure it doesn't save the rulename more than once?
You should use a different data structure, for example a HashMap, like this:
HashMap<String, List<String>> myMap = new HashMap<>();
try {
    br = new BufferedReader(new FileReader(csvFile));
    while ((line = br.readLine()) != null) {
        if (!line.startsWith("#")) {
            String[] parts = line.split(delimiter);
            String key = parts[0];
            String value = parts[1];
            if (myMap.containsKey(key)) {
                myMap.get(key).add(value);
            } else {
                List<String> values = new ArrayList<String>();
                values.add(value);
                myMap.put(key, values);
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
This should work!
Using an ArrayList is not a good choice of data structure here.
I would personally suggest a HashMap<String, List<String>> for this particular purpose.
The rules will be your keys and the rule versions will be your values, as a list of strings.
While traversing your original file, just check whether the rule (key) is present; if it is, add the value to the list of rule versions already present, otherwise add a new key with a new list containing the value.
And if you afterwards need to remove duplicates from a plain list, you can do it like this:
public List<String> removeDuplicates(List<String> myList) {
    // the Hashtable keys entries by the string itself, so duplicates collapse
    // (note: the original order of the list is not preserved)
    Hashtable<String, String> hashtable = new Hashtable<String, String>();
    for (String s : myList) {
        hashtable.put(s, s);
    }
    return new ArrayList<String>(hashtable.values());
}
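If you want duplicates skipped while building the map in the first place, here is a more compact sketch using Java 8's computeIfAbsent with a LinkedHashSet (reusing the csvFile and delimiter variables from the question):
Map<String, Set<String>> rules = new LinkedHashMap<>();
try (BufferedReader reader = new BufferedReader(new FileReader(csvFile))) {
    String line;
    while ((line = reader.readLine()) != null) {
        if (!line.startsWith("#")) {
            String[] parts = line.split(delimiter);
            // a LinkedHashSet keeps insertion order and silently drops duplicate versions
            rules.computeIfAbsent(parts[0].trim(), k -> new LinkedHashSet<>()).add(parts[1].trim());
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}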
This is exactly what key-value pairs can be used for. Just take a look at the Map interface: there you can define a unique key holding various elements as its value, perfect for your issue.
Code:
// This collection takes a String as key
// and prevents duplicates in its associated values
Map<String, HashSet<String>> map = new HashMap<String, HashSet<String>>();
// Check if the collection contains the key you are about to enter
// !REPLACE! -> "rule" with the key you want to enter into your collection
// !REPLACE! -> "whatever" with the value you want to associate with the key
if (!map.containsKey("rule")) {
    map.put("rule", new HashSet<String>());
}
// add the value in either case (the original if/else would drop the first value)
map.get("rule").add("whatever");
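Since Java 8 the same check-then-add pattern fits in one line:
map.computeIfAbsent("rule", k -> new HashSet<String>()).add("whatever");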
Reference: the Set and Map interfaces in the Java documentation.

How to compare each line of a text file in Java?

I have a text file with 792 lines, with content like:
der 17788648
und 14355959
die 10939606
Die 10480597
Now I want to check whether "Die" and "die" are equal when lowercased.
If two strings are equal in lowercase, the word should be copied into the new text file in lowercase and the values summed.
Expected output:
der 17788648
und 14355959
die 114420203
This is what I have so far:
try {
    BufferedReader bk = null;
    BufferedWriter bw = null;
    bk = new BufferedReader(new FileReader("outagain.txt"));
    bw = new BufferedWriter(new FileWriter("outagain5.txt"));
    List<String> list = new ArrayList<>();
    String s = "";
    while (s != null) {
        s = bk.readLine();
        list.add(s);
    }
    for (int k = 0; k < 793; k++) {
        String u = bk.readLine();
        if (list.contains(u.toLowerCase())) {
            // sum values?
        } else {
            bw.write(u + "\n");
        }
    }
    System.out.println(list.size());
} catch (Exception e) {
    System.out.println("Exception caught : " + e);
}
Instead of list.add(s);, use list.add(s.toLowerCase());. Right now your code is comparing lines of indeterminate case to lower-cased lines.
With Java 8, the best approach to standard problems like reading files, comparing, grouping and collecting is the streams API, since it is much more concise. At least as long as the file is only a few KB, there will be no problems with that.
Something like:
// note: Files.lines throws IOException, so wrap this in try/catch or declare it
Map<String, Integer> nameSumMap = Files.lines(Paths.get("test.txt"))
        .map(x -> x.split(" "))
        .collect(Collectors.groupingBy(x -> x[0].toLowerCase(),
                Collectors.summingInt(x -> Integer.parseInt(x[1]))));
First, you can read the file with Files.lines(), which returns a Stream<String>. Then you can split each string, giving a Stream<String[]>.
Finally, you can use the groupingBy() and summingInt() collectors to group by the first element of each array and sum over the second one.
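To write the grouped sums to a new file, one more step is enough (a sketch, assuming an output file out.txt; exception handling omitted as above):
List<String> out = nameSumMap.entrySet().stream()
        .map(e -> e.getKey() + " " + e.getValue())
        .collect(Collectors.toList());
Files.write(Paths.get("out.txt"), out);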
If you don't want to use the stream API, you can also create a HashMap and do your summing manually in the loop.
Use a HashMap to keep track of the unique fields. Before you do a put, do a get to see if the value is already there. If it is, sum the old value with the new one and put it again (this replaces the old entry with the same key):
package com.foundations.framework.concurrency;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;

public class FileSummarizer {
    public static void main(String[] args) {
        HashMap<String, Long> rows = new HashMap<String, Long>();
        String line = "";
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader("data.txt"));
            while ((line = reader.readLine()) != null) {
                String[] tokens = line.split(" ");
                String key = tokens[0].toLowerCase();
                Long current = Long.parseLong(tokens[1]);
                Long previous = rows.get(key);
                if (previous != null) {
                    current += previous;
                }
                rows.put(key, current);
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (reader != null) {
                    reader.close();
                }
                Iterator<String> iterator = rows.keySet().iterator();
                while (iterator.hasNext()) {
                    String key = iterator.next();
                    String value = rows.get(key).toString();
                    System.out.println(key + " " + value);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
The String class has an equalsIgnoreCase method which you can use to compare two strings irrespective of case. So:
String var1 = "Die";
String var2 = "die";
System.out.println(var1.equalsIgnoreCase(var2));
would print true.
If I got your question right, you want to know how to get the prefix from the file, compare it, get the value behind it, and sum the values up for each prefix. Is that about right?
You could use regular expressions to extract the prefixes and values separately. Then you can sum up all values with the same prefix and write them to the file for each one.
If you are not familiar with regular expressions, these links could help you:
Regex on tutorialspoint.com
Regex on vogella.com
For additional tutorials, just search for "java regex" or similar terms.
If you do not want to distinguish between upper- and lowercase strings, just convert them all to lowercase (or uppercase) before comparing them, as @spork explained already.
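For example, a pattern along these lines would capture the word and the number separately (a small sketch):
// requires java.util.regex.Pattern and java.util.regex.Matcher
Pattern p = Pattern.compile("(\\S+)\\s+(\\d+)");
Matcher m = p.matcher("Die 10480597");
if (m.matches()) {
    String word = m.group(1).toLowerCase(); // "die"
    long count = Long.parseLong(m.group(2)); // 10480597
}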

How to put the contents of a file into a String array in Java

I have a String array which contains some records. Now I have to put those records in a file, read the values back, and check the records against the String array values. Here is my String array:
public final static String fields[] = { "FileID", "FileName", "EventType",
"recordType", "accessPointNameNI", "apnSelectionMode",
"causeForRecClosing", "chChSelectionMode",
"chargingCharacteristics", "chargingID", "duration",
"dynamicAddressFlag", "iPBinV4AddressGgsn",
"datavolumeFBCDownlink", "datavolumeFBCUplink",
"qoSInformationNeg"};
I have to put these records in a map using this:
static LinkedHashMap<String, String> getMetaData1() {
    LinkedHashMap<String, String> md = new LinkedHashMap<>();
    for (String fieldName : fields) md.put(fieldName, "");
    return md;
}
Now my file is:
FileID
FileName
EventType
recordType
accessPointNameNI
apnSelectionMode
causeForRecClosing
chChSelectionMode
chargingCharacteristics
chargingID
duration
dynamicAddressFlag
iPBinV4AddressGgsn
datavolumeFBCDownlink
datavolumeFBCUplink
qoSInformationNeg
Now I am reading this file with this function:
static LinkedHashMap<String, String> getMetaData() {
    LinkedHashMap<String, String> md = new LinkedHashMap<>();
    BufferedReader br = null;
    try {
        String sCurrentLine;
        br = new BufferedReader(new FileReader("./file/HuaGPRSConf"));
        while ((sCurrentLine = br.readLine()) != null) {
            md.put(sCurrentLine, "");
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            if (br != null)
                br.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
    return md;
}
Now those two functions are returning values in two different ways. The String array version gives:
{FileID=, FileName=, EventType=, recordType=, accessPointNameNI=, apnSelectionMode=, causeForRecClosing=, chChSelectionMode=, chargingCharacteristics=, chargingID=, duration=, dynamicAddressFlag=, iPBinV4AddressGgsn=, datavolumeFBCDownlink=, datavolumeFBCUplink=, qoSInformationNeg=}
But the map built from the file contains keys with extra spaces:
{ FileID =, FileName=, EventType=, recordType=, accessPointNameNI=, apnSelectionMode=, causeForRecClosing=, chChSelectionMode=, chargingCharacteristics=, chargingID=, duration=, dynamicAddressFlag=, iPBinV4AddressGgsn=, datavolumeFBCDownlink=, datavolumeFBCUplink=, qoSInformationNeg=, rATType=, ratingGroup=, resultCode=, serviceConditionChange=, iPBinV4Address=, sgsnPLMNIdentifier=, timeOfFirstUsage=, timeOfLastUsage=, timeOfReport=, timeUsage=, changeCondition=, changeTime=,.... so on
Now when I try to compare the two maps with this code, they are not equal:
LinkedHashMap<String, String> md1 = getMetaData();
LinkedHashMap<String, String> md2 = getMetaData1();
if (md1.equals(md2)) {
    System.out.println(md1);
} else {
    System.out.println("Not");
}
I cannot understand the problem, can anyone help?
You should use sCurrentLine.trim() to remove unnecessary whitespace.
I suggest first checking the data before comparing. If you find extra spaces, apply trim() to remove them and then compare.
The two maps are not equal because the keys read from the file contain extra whitespace, so keys that look the same are actually different strings; Map.equals compares the entries, not the instances.
Trim each line with String's trim method before using it as a key, and the maps will compare equal.
You can also use the get method of LinkedHashMap to compare individual values.

In Java, I want to split an array into smaller arrays, the length of which varies with the inputted text files

So far, I have two arrays: one with stock codes and one with a list of file names. What I want to do is read the .txt files named by the second array and then split this input into: 1. arrays for each file, and 2. arrays for each part within each file.
I have this:
ImportFiles f1 = new ImportFiles("File");
for (String file : FileArray.filearray) {
    if (debug) {
        System.out.println(file);
    }
    try {
        String line;
        String fileext = "C:\\ASCIIpdbSKJ\\" + file + ".txt";
        importstart = new BufferedReader(new FileReader(fileext));
        for (line = importstart.readLine(); line != null; line = importstart.readLine()) {
            importarray.add(line);
            if (debug) {
                System.out.println(importarray.size());
            }
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    importarray.add("End");
}
This approach works for creating one large array out of all the files. Will it be easier to change the input method to split the data as it comes in, or to split the large array afterwards?
At this point, the stock code array is irrelevant; once I have split the arrays down, I know where I will go from there.
Thanks.
Edit: I am aware that this code is incomplete in terms of { }, but only print streams and debugging are missing.
If you want to get a map of each filename to all its lines, across all the files, here are the relevant code parts:
Map<String, List<String>> fileLines = new HashMap<String, List<String>>();
for (String file : FileArray.filearray) {
    String fileext = "C:\\ASCIIpdbSKJ\\" + file + ".txt";
    BufferedReader reader = new BufferedReader(new FileReader(fileext));
    List<String> lines = new ArrayList<String>();
    String line;
    while ((line = reader.readLine()) != null) {
        lines.add(line);
    }
    reader.close();
    fileLines.put(file, lines);
}
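The resulting map can then be processed per file, for example (a small usage sketch):
for (Map.Entry<String, List<String>> entry : fileLines.entrySet()) {
    System.out.println(entry.getKey() + ": " + entry.getValue().size() + " lines");
}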
