I am reading a text file of pipe-delimited records, where each token of a record corresponds to a unique key and a value. I need to compare the values of each record (with the same keys). To do this I want to use a HashMap to store the first record, then iterate and store the second record, and finally compare both HashMaps to see whether they contain the same values. But I am not sure how to manage or create two HashMaps within the same loop while reading the text file.
Example :
txt file as below
A|B|C|D|E
A|B|C|O|E
O|O|C|D|E
Each token of each record will be stored against a unique key as below:
First record:
map.put(1, A);
map.put(2, B);
map.put(3, C);
map.put(4, D);
map.put(5, E);
Second record:
map.put(1, A);
map.put(2, B);
map.put(3, C);
map.put(4, O);
map.put(5, E);
Third record:
map.put(1, O);
map.put(2, O);
map.put(3, C);
map.put(4, D);
map.put(5, E);
When I read each record in Java using an input stream, how can I create two HashMaps for two different records within the same read loop in order to compare them?
FileInputStream fstream = new FileInputStream("C://PROJECTS//sampleInput.txt");
BufferedReader br = new BufferedReader(new InputStreamReader(fstream));
// Read the file line by line
String line1 = null;
while ((line1 = br.readLine()) != null) {
    Scanner scan = new Scanner(line1).useDelimiter("\\|");
    while (scan.hasNext()) {
        // reading each token of the record
        // how to create 2 hashmaps of 2 records to compare... or is there any
        // simpler way to compare each incoming record?
    }
    printIt(counter);
}
Maybe you're overcomplicating this by creating a new HashMap for each read line. I'd rather suggest creating an ArrayList<String> (or some other one-dimensional data structure) and storing all the read lines sequentially. After that you can iterate over this structure and compare the values against each other using String methods.
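In plain Java, that suggestion might be sketched as follows. This is a minimal illustration: LineDiff, readAll, and countAdjacentDiffs are names I made up for the sketch, not part of the original post.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LineDiff {
    // Step 1: store all read lines sequentially in a one-dimensional structure.
    public static List<String> readAll(BufferedReader br) throws IOException {
        List<String> lines = new ArrayList<>();
        String line;
        while ((line = br.readLine()) != null) {
            lines.add(line);
        }
        return lines;
    }

    // Step 2: compare each record against its predecessor using String equals.
    public static int countAdjacentDiffs(List<String> lines) {
        int diffs = 0;
        for (int i = 1; i < lines.size(); i++) {
            if (!lines.get(i).equals(lines.get(i - 1))) {
                diffs++;
            }
        }
        return diffs;
    }
}
```

With the three example records from the question, the first and second differ and the second and third differ, so the count is 2.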
If you just want to count the number of differences, you can use the following example:
int diffCount = 0;
String fileName = "test.psv";
try (CSVReader reader = new CSVReader(new FileReader(fileName), '|')) {
    String[] prevLine = reader.readNext();
    String[] nextLine;
    while (prevLine != null && (nextLine = reader.readNext()) != null) {
        if (!Arrays.equals(prevLine, nextLine)) {
            diffCount++;
        }
        prevLine = nextLine;
    }
    logger.info("Diff count: {}", diffCount);
} catch (FileNotFoundException e) {
    logger.error("Unable to find file {}", fileName, e);
} catch (IOException e) {
    logger.error("Got IO exception", e);
}
If for some reason you really want to create two HashMaps, you can do it like this:
int diffCount = 0;
String fileName = "test.psv";
try (CSVReader reader = new CSVReader(new FileReader(fileName), '|')) {
    Map<Integer, String> prevLine = readNextLine(reader);
    Map<Integer, String> nextLine;
    while (prevLine != null && (nextLine = readNextLine(reader)) != null) {
        if (!prevLine.equals(nextLine)) {
            diffCount++;
        }
        prevLine = nextLine;
    }
    logger.info("Diff count: {}", diffCount);
} catch (FileNotFoundException e) {
    logger.error("Unable to find file {}", fileName, e);
} catch (IOException e) {
    logger.error("Got IO exception", e);
}

private Map<Integer, String> readNextLine(CSVReader reader) throws IOException {
    String[] nextLine = reader.readNext();
    return nextLine != null ? convert2map(nextLine) : null;
}

// requires static imports of Collectors.toMap and Function.identity
private Map<Integer, String> convert2map(String[] nextLine) {
    return IntStream.range(0, nextLine.length)
            .boxed()
            .collect(toMap(identity(), index -> nextLine[index]));
}
I'm stuck on this part. The aim is to take the values from a file.ini with this format:
X = Y
X1 = Y1
X2 = Y2
take the Y values and substitute them into an SCXML file in place of the corresponding X keys, and save the new file.scxml.
As you can see from my pasted code, I use a HashMap, and the keys and values print correctly; however, the code that replaces the values works only for the first entry of the HashMap.
The code is currently as follows:
public String getPropValues() throws IOException {
    try {
        Properties prop = new Properties();
        String pathconf = this.pathconf;
        String pathxml = this.pathxml;
        // Read the conf file
        File inputFile = new File(pathconf);
        InputStream is = new FileInputStream(inputFile);
        BufferedReader br = new BufferedReader(new InputStreamReader(is));
        // load the buffered file
        prop.load(br);
        String name = prop.getProperty("name");
        // Read the xml file to get the format
        FileReader reader = new FileReader(pathxml);
        String newString;
        StringBuffer str = new StringBuffer();
        String lineSeparator = System.getProperty("line.separator");
        BufferedReader rb = new BufferedReader(reader);
        // read file.ini into a HashMap
        Map<String, String> mapFromFile = getHashMapFromFile();
        // iterate over the HashMap entries
        for (Map.Entry<String, String> entry : mapFromFile.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
            // replace values
            while ((newString = rb.readLine()) != null) {
                str.append(lineSeparator);
                str.append(newString.replaceAll(entry.getKey(), entry.getValue()));
            }
        }
        rb.close();
        String pathwriter = pathxml + name + ".scxml";
        BufferedWriter bw = new BufferedWriter(new FileWriter(new File(pathwriter)));
        bw.write(str.toString());
        // flush the stream
        bw.flush();
        // close the stream
        bw.close();
    } catch (Exception e) {
        System.out.println("Exception: " + e);
    }
    return result;
}
So my .ini file is, for example:
Apple = red
Lemon = yellow
It prints the keys and values correctly:
Apple -> red
Lemon -> yellow
but in the file it replaces only Apple with red, and not the other keys.
The problem lies in your control-flow order.
By the time the first iteration of your for loop (the one for the first entry, Apple -> red) has run, the BufferedReader rb has reached the end of the stream, so subsequent iterations do nothing.
You then have to either reinitialize the BufferedReader for each iteration or, better, invert the loops so that the iteration over your Map entries happens inside the BufferedReader read loop:
EDIT (following #David's hints)
You can assign the result of each replacement back to the line variable, so the fully replaced line is what gets appended to the result at each line iteration:
public String getPropValues() throws IOException {
    try {
        // ...
        BufferedReader rb = new BufferedReader(reader);
        // read file.ini into a HashMap
        Map<String, String> mapFromFile = getHashMapFromFile();
        // replace values
        while ((newString = rb.readLine()) != null) {
            // iterate over the HashMap entries
            for (Map.Entry<String, String> entry : mapFromFile.entrySet()) {
                newString = newString.replace(entry.getKey(), entry.getValue());
            }
            str.append(lineSeparator)
               .append(newString);
        }
        rb.close();
        // ...
    } catch (Exception e) {
        System.out.println("Exception: " + e);
    }
    return result;
}
I have a CSV file with two attributes: the first of type string and the second of type double.
Starting from this CSV file, I would like to produce another one, sorted in increasing order by the value of the second attribute. In SQL there is the ORDER BY clause, which sorts results by the specified attribute; I would like to get the same result as ORDER BY.
Example input CSV file:
tricolor;14.0
career;9.0
salty;1020.0
looks;208.0
bought;110.0
Expected output CSV file:
career;9.0
tricolor;14.0
bought;110.0
looks;208.0
salty;1020.0
Read the CSV file into a List of Object[] (one Object[] per line in your CSV file):
First element of the array is the line itself (a String)
Second element of the array is the value of the double (a Double)
so you have the following list:
{
["tricolor;14.0", 14.0],
["career;9.0", 9.0],
["salty;1020.0", 1020.0],
["looks;208.0", 208.0],
["bought;110.0", 110.0]
}
Then sort it based on the value of the double
And you can then write it back to a CSV file (only writing the first element of each array)
List<Object[]> list = readFile("myFile.csv");
list.sort(Comparator.comparing(p -> (Double)p[1]));
// write to csv file, just printing it out here
list.forEach(p -> System.out.println(p[0]));
The method to read the file:
private static List<Object[]> readFile(String fileName) {
    List<Object[]> list = new ArrayList<>();
    try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
        String line;
        String[] splitLine;
        while ((line = br.readLine()) != null) {
            splitLine = line.split(";");
            // add an array: first element is the line itself,
            // second element is the double value
            list.add(new Object[] {line, Double.valueOf(splitLine[1])});
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return list;
}
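For the write-back step mentioned above, a minimal sketch using java.nio might look like this. CsvWriter, writeSorted, and the output path are illustrative names for this sketch, not from the original answer.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class CsvWriter {
    // Writes the first element of each row array (the original CSV line)
    // back out, one line per row, preserving the sorted order of the list.
    public static void writeSorted(List<Object[]> list, Path out) throws IOException {
        List<String> lines = new ArrayList<>();
        for (Object[] row : list) {
            lines.add((String) row[0]);
        }
        Files.write(out, lines);
    }
}
```

Called after the sort shown above, this produces the expected output file with one record per line.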
EDIT: If you want reverse order, once you have your sorted list you can reverse it using the convenient reverse method of the Collections class:
Collections.reverse(list);
We can try the general approach of parsing the file into a sorted map (e.g. TreeMap), then iterating the map and writing back out to file.
TreeMap<String, Double> map = new TreeMap<String, Double>();
try (BufferedReader br = Files.newBufferedReader(Paths.get("yourfile.csv"))) {
    String line;
    while ((line = br.readLine()) != null) {
        String[] parts = line.split(";");
        map.put(parts[0], Double.parseDouble(parts[1]));
    }
} catch (IOException e) {
    System.err.format("IOException: %s%n", e);
}

// now write the map to file, sorted ascending in alphabetical key order
try (FileWriter writer = new FileWriter("yourfileout.csv");
     BufferedWriter bw = new BufferedWriter(writer)) {
    for (Map.Entry<String, Double> entry : map.entrySet()) {
        bw.write(entry.getKey() + ";" + entry.getValue());
        bw.newLine(); // without this, all entries end up on a single line
    }
} catch (IOException e) {
    System.err.format("IOException: %s%n", e);
}
Notes:
I assume that the string values in the first column would always be unique. If there could be duplicates, the above script would have to be modified to use a map of lists, or something along those lines.
I also assume that the string values would all be lowercase. If not, then you might not get the sorting you expect. One solution, should this be a problem, would be to lowercase (or uppercase) every string before inserting that key into the map.
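The map-of-lists variant mentioned in the first note might look like this minimal sketch. DuplicateKeys and parse are illustrative names; the input is assumed to use the same semicolon format as the question.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DuplicateKeys {
    // Groups all double values under each key, so duplicate first-column
    // strings are kept instead of overwriting each other.
    public static Map<String, List<Double>> parse(List<String> lines) {
        Map<String, List<Double>> map = new TreeMap<>();
        for (String line : lines) {
            String[] parts = line.split(";");
            map.computeIfAbsent(parts[0], k -> new ArrayList<>())
               .add(Double.parseDouble(parts[1]));
        }
        return map;
    }
}
```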
First, read the semicolon-separated values into a LinkedHashMap:
Map<String, Double> map = new LinkedHashMap<String, Double>();
try (BufferedReader br = Files.newBufferedReader(Paths.get("yourfile.csv"))) {
    String line;
    while ((line = br.readLine()) != null) {
        String[] parts = line.split(";");
        map.put(parts[0], Double.parseDouble(parts[1]));
    }
} catch (IOException e) {
    System.err.format("IOException: %s%n", e);
}
Then sort the map based on the double values. With Java 8:
LinkedHashMap<String, Double> sortedMap = map.entrySet().stream()
        .sorted(Entry.comparingByValue())
        .collect(Collectors.toMap(Entry::getKey, Entry::getValue,
                                  (e1, e2) -> e1, LinkedHashMap::new));
I have a CSV file which contains rules and ruleversions. The CSV file looks like this:
CSV FILE:
#RULENAME, RULEVERSION
RULE,01-02-01
RULE,01-02-02
RULE,01-02-34
OTHER_RULE,01-02-04
THIRDRULE, 01-02-04
THIRDRULE, 01-02-04
As you can see, one rule can have one or more rule versions. What I need to do is read this CSV file and put the entries in an array. I am currently doing that with the following method:
private static List<String[]> getRulesFromFile() {
    String csvFile = "rulesets.csv";
    BufferedReader br = null;
    String line = "";
    String delimiter = ",";
    List<String[]> input = new ArrayList<String[]>();
    try {
        br = new BufferedReader(new FileReader(csvFile));
        while ((line = br.readLine()) != null) {
            if (!line.startsWith("#")) {
                String[] rulesetEntry = line.split(delimiter);
                input.add(rulesetEntry);
            }
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (br != null) {
            try {
                br.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return input;
}
But I need to adapt the script so that it saves the information in the following format:
ARRAY (
=> RULE => 01-02-01, 01-02-02, 01-02-04
=> OTHER_RULE => 01-02-34
=> THIRDRULE => 01-02-01, 01-02-02
)
What is the best way to do this? A multidimensional array? And how do I make sure the rule name isn't saved more than once?
You should use a different data structure, for example a HashMap<String, List<String>>, like this:
HashMap<String, List<String>> myMap = new HashMap<>();
try {
    br = new BufferedReader(new FileReader(csvFile));
    while ((line = br.readLine()) != null) {
        if (!line.startsWith("#")) {
            String[] parts = line.split(delimiter);
            String key = parts[0];
            String value = parts[1];
            if (myMap.containsKey(key)) {
                myMap.get(key).add(value);
            } else {
                List<String> values = new ArrayList<String>();
                values.add(value);
                myMap.put(key, values);
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
This should work!
An ArrayList is not a good choice of data structure here.
I would personally suggest using a HashMap<String, List<String>> for this particular purpose.
The rules will be your keys and the rule versions will be your values, each a list of strings.
While traversing your original file, check whether the rule (key) is already present; if it is, add the value to the list of rule versions already stored, otherwise add a new key and add the value to it.
For instance like this:
public List<String> removeDuplicates(List<String> myList) {
    Hashtable<String, String> hashtable = new Hashtable<String, String>();
    for (String s : myList) {
        hashtable.put(s, s);
    }
    return new ArrayList<String>(hashtable.values());
}
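The check-then-add traversal described in this answer can also be written more compactly with Map.computeIfAbsent, which creates the list only on the first occurrence of a key. RuleGrouper and group are illustrative names for this sketch, not from the original answer.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RuleGrouper {
    // Groups rule versions by rule name. computeIfAbsent creates the list
    // on the first sight of a key, so a rule name is never stored twice.
    public static Map<String, List<String>> group(List<String> csvLines) {
        Map<String, List<String>> rules = new LinkedHashMap<>();
        for (String line : csvLines) {
            if (line.startsWith("#")) continue; // skip the header comment
            String[] parts = line.split(",");
            rules.computeIfAbsent(parts[0].trim(), k -> new ArrayList<>())
                 .add(parts[1].trim());
        }
        return rules;
    }
}
```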
This is exactly what key-value pairs can be used for. Just take a look at the Map interface. There you can define a unique key whose value holds various elements, which is perfect for your issue.
Code:
// This collection maps a String key to a set of values,
// preventing duplicates among the values associated with each key
Map<String, HashSet<String>> map = new HashMap<String, HashSet<String>>();

// Check if the collection already contains the key you are about to enter
// !REPLACE! -> "rule" with the key you want to enter into your collection
// !REPLACE! -> "whatever" with the value you want to associate with the key
if (!map.containsKey("rule")) {
    map.put("rule", new HashSet<String>());
}
map.get("rule").add("whatever"); // add the value whether or not the key was new
Reference:
Set
Map
I have a String array which contains some records. Now I have to put those records in a file, read the values back, and check them against the String array values. Here is my String array:
public final static String fields[] = { "FileID", "FileName", "EventType",
"recordType", "accessPointNameNI", "apnSelectionMode",
"causeForRecClosing", "chChSelectionMode",
"chargingCharacteristics", "chargingID", "duration",
"dynamicAddressFlag", "iPBinV4AddressGgsn",
"datavolumeFBCDownlink", "datavolumeFBCUplink",
"qoSInformationNeg"};
I have to put these records in a map using this:
static LinkedHashMap<String, String> getMetaData1() {
    LinkedHashMap<String, String> md = new LinkedHashMap<>();
    for (String fieldName : fields) md.put(fieldName, "");
    return md;
}
Now my file is:
FileID
FileName
EventType
recordType
accessPointNameNI
apnSelectionMode
causeForRecClosing
chChSelectionMode
chargingCharacteristics
chargingID
duration
dynamicAddressFlag
iPBinV4AddressGgsn
datavolumeFBCDownlink
datavolumeFBCUplink
qoSInformationNeg
Now I am reading this file with this function:
static LinkedHashMap<String, String> getMetaData() {
    LinkedHashMap<String, String> md = new LinkedHashMap<>();
    BufferedReader br = null;
    try {
        String sCurrentLine;
        br = new BufferedReader(new FileReader("./file/HuaGPRSConf"));
        while ((sCurrentLine = br.readLine()) != null) {
            md.put(sCurrentLine, "");
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            if (br != null)
                br.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
    return md;
}
Those two functions are returning values in two different ways. The String array version gives:
{FileID=, FileName=, EventType=, recordType=, accessPointNameNI=, apnSelectionMode=, causeForRecClosing=, chChSelectionMode=, chargingCharacteristics=, chargingID=, duration=, dynamicAddressFlag=, iPBinV4AddressGgsn=, datavolumeFBCDownlink=, datavolumeFBCUplink=, qoSInformationNeg=}
But the map built from the file contains keys with extra spaces:
{ FileID =, FileName=, EventType=, recordType=, accessPointNameNI=, apnSelectionMode=, causeForRecClosing=, chChSelectionMode=, chargingCharacteristics=, chargingID=, duration=, dynamicAddressFlag=, iPBinV4AddressGgsn=, datavolumeFBCDownlink=, datavolumeFBCUplink=, qoSInformationNeg=, rATType=, ratingGroup=, resultCode=, serviceConditionChange=, iPBinV4Address=, sgsnPLMNIdentifier=, timeOfFirstUsage=, timeOfLastUsage=, timeOfReport=, timeUsage=, changeCondition=, changeTime=,.... so on
Now when I try to compare the two maps with this function, they are not equal:
LinkedHashMap<String, String> md1 = getMetaData();
LinkedHashMap<String, String> md2 = getMetaData1();
if (md1.equals(md2)) {
    System.out.println(md1);
} else {
    System.out.println("Not");
}
I cannot understand the problem. Can anyone help?
You should use sCurrentLine.trim() to remove unnecessary whitespace.
I suggest first checking the data before comparing. If you find extra spaces, apply trim() to remove them and then compare.
You are checking whether two different instances of LinkedHashMap are equal, and they are not.
You have to use the get method of LinkedHashMap to compare individual values.
Also, you should remove the extra spaces with the String trim method.
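Applying the trim advice while the map is built avoids the mismatch entirely; a minimal sketch (TrimmedKeys and toMap are illustrative names for this sketch):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TrimmedKeys {
    // Normalises each line before it becomes a map key, so whitespace read
    // from the file can no longer break equals() comparisons between maps.
    public static Map<String, String> toMap(Iterable<String> lines) {
        Map<String, String> md = new LinkedHashMap<>();
        for (String line : lines) {
            md.put(line.trim(), "");
        }
        return md;
    }
}
```

A map built from " FileID " then compares equal to one built from "FileID".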
So far, I have two arrays: one with stock codes and one with a list of file names. What I want to do is read the .txt file for each of the file names in the second array and then split the input into: 1. arrays for each file, and 2. arrays for each part within each file.
I have this:
ImportFiles f1 = new ImportFiles("File");
for (String file : FileArray.filearray) {
    if (debug) {
        System.out.println(file);
    }
    try {
        String line;
        String fileext = "C:\\ASCIIpdbSKJ\\" + file + ".txt";
        importstart = new BufferedReader(new FileReader(fileext));
        for (line = importstart.readLine(); line != null; line = importstart.readLine()) {
            importarray.add(line);
            if (debug) {
                System.out.println(importarray.size());
            }
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    importarray.add("End");
}
This approach works for creating one large array from all the files. Would it be easier to change the input method to split the data as it comes in, or to split the large array afterwards?
At this point, the stock code array is irrelevant. Once I have split the arrays down I know where I will go from there.
Thanks.
Edit: I am aware that this code is incomplete in terms of { }, but only print streams and debugging are missing.
If you want to get a map from each file name to all of its lines, here are the relevant code parts:
Map<String, List<String>> fileLines = new HashMap<String, List<String>>();
for (String file : FileArray.filearray) {
    // build the path the same way as in the question
    BufferedReader reader = new BufferedReader(new FileReader("C:\\ASCIIpdbSKJ\\" + file + ".txt"));
    List<String> lines = new ArrayList<String>();
    String line;
    while ((line = reader.readLine()) != null) {
        lines.add(line);
    }
    fileLines.put(file, lines);
}