I have to generate an execution plan for each query extracted from a log file, build a HashMap with the query as the key and the execution plan as the value, and display this HashMap as a table in Angular.
So far I have tried the following. I get the table with queries and execution plans as two columns, but the explain plan is displayed on a single line instead of in the tabular form I get when I print it to the console (see the picture). Is there a way to get the same format when displaying it in the Angular table?
The console prints it in this form, which is what I want in the table as well.
CODE:
ExecutionPlanService: Connects to the Oracle DB and generates Explain Plan.
@Service
public class ExecutionPlanService {
public List<String> getExecPlan(String line) throws IOException, SQLException, ClassNotFoundException{
    List<String> plan = new ArrayList<>(); // local list: a field here would keep growing across calls
    Class.forName("oracle.jdbc.driver.OracleDriver");
    // try-with-resources closes the connection, statement and result set even on errors
    try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@10.49.7.212:1521:AVMPRD", "CtmDitRW", "CtmDit1743");
         Statement stmt = con.createStatement()) {
        stmt.execute(line); // runs "Explain plan for <query>"
        try (ResultSet rs = stmt.executeQuery("select plan_table_output from table(dbms_xplan.display())")) {
            while (rs.next()) {
                plan.add(rs.getString(1));
            }
        }
    }
    return plan;
}
public String prepend(String line, String prepend){
    // Simple concatenation; the original loop just assigned the same value line.length() times
    return prepend + line;
}
}
ExecPlanService: Checks whether a given line is an SQL statement, generates the explain plan, and puts the pair into a HashMap as key and value.
private SQLCheckerService checker = new SQLCheckerService();
private ExecutionPlanService service = new ExecutionPlanService();
List<String>l = new ArrayList<String>();
Map<String, List<String>> map = new HashMap<>();
String trimmedLine;
public Map<String, List<String>> processLine(String line) throws FileNotFoundException, UnsupportedEncodingException, ClassNotFoundException, IOException, SQLException {
if(this.checker.isSelectStatement(line))
{
System.out.println("working!!");
//Trim the line to start with a SQL keyword
trimmedLine= this.checker.trimString(line);
String prepend = this.service.prepend(trimmedLine, "Explain plan for ");
l = this.service.getExecPlan(prepend);
l.forEach(System.out::println);
map.put(trimmedLine, l);
System.out.println(map);
}
return map;
}
ReadFileExecPlanService: Reads a log file line by line and processes each line to generate explain plans.
public Map<String, List<String>> readFile(String filename) throws IOException, ClassNotFoundException, SQLException{
ExecPlanService eservice = new ExecPlanService(); // local variable: "private" is only legal on fields
Map<String, List<String>> hashMap = new HashMap<>();
try (BufferedReader br = new BufferedReader(new FileReader(filename))){
for(String line; (line = br.readLine()) != null; ){
hashMap = this.eservice.processLine(line);
}
}
System.out.println(hashMap);
return hashMap;
}
ExecPlanController
@RestController
public class ExecPlanController {
@Autowired
ReadFileExecPlanService exservice;
Map<String, List<String>> map = new HashMap<>();
@GetMapping("/getPlan")
@ResponseBody
public Map<String, List<String>> getPlan() throws IOException, ClassNotFoundException, SQLException{
String rootLocation = "C:\\Users\\Apoorva_Sharma\\Desktop\\upload-dir";
File directory = new File(rootLocation);
//get all the files from a directory
File[] fList = directory.listFiles();
for (File file : fList){
if (file.isFile()){
map = (exservice.readFile(file.getAbsolutePath()));
}
}
System.out.println("WORKING!!");
System.out.println(map);
return map;
}
}
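One possible approach (a sketch, not part of the original code): join each plan's lines with '\n' on the server and let the Angular template render the value inside a <pre> element, which preserves line breaks and spacing. Inside getPlan(), assuming the return type is changed accordingly:
// Sketch: flatten each Map<String, List<String>> entry into a newline-joined
// string; a <pre> tag in the Angular template then keeps the console layout.
Map<String, String> printable = new HashMap<>();
map.forEach((query, planLines) -> printable.put(query, String.join("\n", planLines)));
return printable; // the method's return type would become Map<String, String>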
Related
I am trying to write a program that checks two files and prints the common contents from both files.
Example of the file 1 content would be:
James 1
Cody 2
John 3
Example of the file 2 content would be:
1 Computer Science
2 Chemistry
3 Physics
So the final output printed on the console would be:
James Computer Science
Cody Chemistry
John Physics
Here is what I have so far in my code:
public class Filereader {
public static void main(String[] args) throws Exception {
File file = new File("file.txt");
File file2 = new File("file2.txt");
BufferedReader reader = new BufferedReader(new FileReader(file));
BufferedReader reader2 = new BufferedReader(new FileReader(file2));
String st, st2;
while ((st = reader.readLine()) != null) {
System.out.println(st);
}
while ((st2 = reader2.readLine()) != null) {
System.out.println(st2);
}
reader.close();
reader2.close();
}
}
I am having trouble figuring out how to match the file contents and print only the student name and major by matching the student id in each file. Thanks for all the help.
You can build on the other answers and create an object for each file, like tables in a database.
public class Person{
Long id;
String name;
//getters and setters
}
public class Course{
Long id;
String name;
//getters and setters
}
Then you have more control over your columns, and it is simple to use.
Furthermore, you can use an ArrayList<Person> and an ArrayList<Course>, and the relation can be a field inside your objects, such as a courseId in the Person class.
if (person.getCourseId().equals(course.getId())) { // equals(), not ==, for boxed Long ids
...
}
Then, if the match is on the first number in each file, use person.getId().equals(course.getId()).
P.S.: Do not split on every space in your case, because some values consist of more than one word, e.g. 1 Computer Science; split with a limit instead, such as split(" ", 2).
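Not tested, but a minimal sketch of that idea, using slimmed-down stand-ins for the Person and Course classes above (constructors are added here for brevity):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class JoinExample {
    // Slimmed-down stand-ins for the Person/Course classes above
    static class Person { Long id; String name; Person(Long id, String name) { this.id = id; this.name = name; } }
    static class Course { Long id; String name; Course(Long id, String name) { this.id = id; this.name = name; } }

    public static void main(String[] args) throws Exception {
        List<Person> people = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get("file.txt"))) {
            String[] p = line.split(" ", 2);               // "James 1" -> ["James", "1"]
            people.add(new Person(Long.valueOf(p[1]), p[0]));
        }
        List<Course> courses = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get("file2.txt"))) {
            String[] p = line.split(" ", 2);               // "1 Computer Science" -> ["1", "Computer Science"]
            courses.add(new Course(Long.valueOf(p[0]), p[1]));
        }
        for (Person person : people) {                     // join on id; equals(), not ==, for boxed Longs
            for (Course course : courses) {
                if (person.id.equals(course.id)) {
                    System.out.println(person.name + " " + course.name);
                }
            }
        }
    }
}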
What you want is to organize your text file data into maps, then merge their data. This will work even if your data are mixed, not in order.
public class Filereader {
public static void main(String[] args) throws Exception {
File file = new File("file.txt");
File file2 = new File("file2.txt");
BufferedReader reader = new BufferedReader(new FileReader(file));
BufferedReader reader2 = new BufferedReader(new FileReader(file2));
String st, st2;
Map<Integer, String> nameMap = new LinkedHashMap<>();
Map<Integer, String> majorMap = new LinkedHashMap<>();
while ((st = reader.readLine()) != null) {
System.out.println(st);
String[] parts = st.split(" "); // Here you got ["James", "1"]
String name = parts[0];
Integer id = Integer.parseInt(parts[1]);
nameMap.put(id, name);
}
while ((st2 = reader2.readLine()) != null) {
System.out.println(st2);
String[] parts = st2.split(" ", 2); // limit 2 keeps multi-word majors together: ["1", "Computer Science"]
String name = parts[1];
Integer id = Integer.parseInt(parts[0]);
majorMap.put(id, name);
}
reader.close();
reader2.close();
// Combine and print
nameMap.keySet().stream().forEach(id -> {
System.out.println(nameMap.get(id) + " " + majorMap.get(id));
});
}
}
You should read both files at the same time, line by line. This is easy to accomplish with a single while statement.
while ((st = reader.readLine()) != null && (st2 = reader2.readLine()) != null) {
// print both st and st2
}
The way your code is written now, it reads one file at a time, printing data to the console from each individual file. If you want to meld the results together, you have to combine the output of the files in a single loop.
Given that one file may have extra or missing rows, or that the ids may come in nonsequential order, you may want to store the results in a data structure instead, such as a List, since you know the specific index of each value and where it should fit in.
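For example, a rough sketch of that single loop, assuming the two files are already aligned row by row:
// Assumes the rows are aligned; split with limit 2 keeps "Computer Science" together
try (BufferedReader reader = new BufferedReader(new FileReader("file.txt"));
     BufferedReader reader2 = new BufferedReader(new FileReader("file2.txt"))) {
    String st, st2;
    while ((st = reader.readLine()) != null && (st2 = reader2.readLine()) != null) {
        String name = st.split(" ", 2)[0];     // "James 1" -> "James"
        String major = st2.split(" ", 2)[1];   // "1 Computer Science" -> "Computer Science"
        System.out.println(name + " " + major);
    }
}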
Combining the NIO Files and Stream API, it's a little simpler:
public static void main(String[] args) throws Exception {
Map<String, List<String[]>> f1 = Files
.lines(Paths.get("file1"))
.map(line -> line.split(" "))
.collect(Collectors.groupingBy(arr -> arr[1]));
Map<String, List<String[]>> f2 = Files
.lines(Paths.get("file2"))
.map(line -> line.split(" "))
.collect(Collectors.groupingBy(arr -> arr[0]));
Stream.concat(f1.keySet().stream(), f2.keySet().stream())
.distinct()
.map(key -> f1.get(key).get(0)[0] + " " + f2.get(key).get(0)[1])
.forEach(System.out::println);
}
As can easily be noticed in the code, there are assumptions of valid data and of consistency between the two files. If those don't hold, you may need to first run a filter to exclude entries missing in either file:
Stream.concat(f1.keySet().stream(), f2.keySet().stream())
.filter(key -> f1.containsKey(key) && f2.containsKey(key))
.distinct()
...
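Put together, the guarded pipeline would read (same f1 and f2 maps as above):
Stream.concat(f1.keySet().stream(), f2.keySet().stream())
    .filter(key -> f1.containsKey(key) && f2.containsKey(key))
    .distinct()
    .map(key -> f1.get(key).get(0)[0] + " " + f2.get(key).get(0)[1])
    .forEach(System.out::println);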
If you change the order so that the number comes first in both files, you can read each file into a HashMap and then create a Set of the common keys. Then loop through the set of common keys and grab the associated value from each HashMap to print:
My solution is verbose but I wrote it that way so that you can see exactly what's happening.
import java.util.Set;
import java.util.HashSet;
import java.util.Map;
import java.util.HashMap;
import java.io.File;
import java.util.Scanner;
class J {
public static Map<String, String> fileToMap(File file) throws Exception {
// TODO - Make sure the file exists before opening it
// Scans the input file
Scanner scanner = new Scanner(file);
// Create the map
Map<String, String> map = new HashMap<>();
String line;
String name;
String code;
String[] parts = new String[2];
// Scan line by line
while (scanner.hasNextLine()) {
// Get next line
line = scanner.nextLine();
// TODO - Make sure the string has at least 1 space
// Split at the first space only (limit 2), so multi-word values like "Computer Science" stay intact
parts = line.split(" ", 2);
// Get the class code and string val
code = parts[0];
name = parts[1];
// Insert into map
map.put(code, name);
}
// Close input stream
scanner.close();
// Give the map back
return map;
}
public static Set<String> commonKeys(Map<String, String> nameMap,
Map<String, String> classMap) {
Set<String> commonSet = new HashSet<>();
// Get a set of keys for both maps
Set<String> nameSet = nameMap.keySet();
Set<String> classSet = classMap.keySet();
// Loop through one set
for (String key : nameSet) {
// Make sure the other set has it
if (classSet.contains(key)) {
commonSet.add(key);
}
}
return commonSet;
}
public static Map<String, String> joinByKey(Map<String, String> namesMap,
Map<String, String> classMap,
Set<String> commonKeys) {
Map<String, String> map = new HashMap<String, String>();
// Loop through common keys
for (String key : commonKeys) {
// TODO - check for nulls if get() returns nothing
// Fetch the associated value from each map
map.put(namesMap.get(key), classMap.get(key));
}
return map;
}
public static void main(String[] args) throws Exception {
// Surround in try catch
File names = new File("names.txt");
File classes = new File("classes.txt");
Map<String, String> nameMap = fileToMap(names);
Map<String, String> classMap = fileToMap(classes);
Set<String> commonKeys = commonKeys(nameMap, classMap);
Map<String, String> nameToClass = joinByKey(nameMap, classMap, commonKeys);
System.out.println(nameToClass);
}
}
names.txt
1 James
2 Cody
3 John
5 Max
classes.txt
1 Computer Science
2 Chemistry
3 Physics
4 Biology
Output:
{Cody=Chemistry, James=Computer Science, John=Physics}
Notes:
I added keys in classes.txt and names.txt that purposely did not match, so you can see that they do not come up in the output. That is because those keys never make it into the commonKeys set, so they never get inserted into the joined map.
You can loop through the resulting HashMap if you want by calling map.entrySet().
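For example:
for (Map.Entry<String, String> entry : nameToClass.entrySet()) {
    System.out.println(entry.getKey() + " " + entry.getValue());
}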
I've been trying to make a Java program in which a tab-delimited CSV file is read line by line; the first column (which is a string) is added as a key to a hash map, and the second column (an integer) is its value.
In the input file there are duplicate keys but with different values, so I was going to append each value under the existing key to form an ArrayList of values.
I can't figure out the best way of doing this and was wondering if anyone could help?
Thanks
EDIT: sorry guys, here's where I've got to with the code so far:
I should add that the first column is the value and the second column is the key.
public class WordNet {
private final HashMap<String, ArrayList<Integer>> words;
private final static String LEXICAL_UNITS_FILE = "wordnet_data/wn_s.csv";
public WordNet() throws FileNotFoundException, IOException {
words = new HashMap<>();
readLexicalUnitsFile();
}
private void readLexicalUnitsFile() throws FileNotFoundException, IOException{
BufferedReader in = new BufferedReader(new FileReader(LEXICAL_UNITS_FILE));
String line;
while ((line = in.readLine()) != null) {
String columns[] = line.split("\t");
if (!words.containsKey(columns[1])) {
words.put(columns[1], new ArrayList<>());
}
}
in.close();
}
}
You are close
String columns[] = line.split("\t");
if (!words.containsKey(columns[1])) {
words.put(columns[1], new ArrayList<>());
}
should be
String columns[] = line.split("\t");
String key = columns[0]; // enhance readability of code below
List<Integer> list = words.get(key); // try to fetch the list
if (list == null) // check if the key is defined
{ // if not
list = new ArrayList<>(); // create a new list
words.put(key,list); // and add it to the map
}
list.add(Integer.valueOf(columns[1])); // in either case, add the value to the list
In response to the OP's comment/question
... the final line just adds the integer to the list but not to the hashmap, does something need to be added after that?
After the statement
List<Integer> list = words.get(key);
there are two possibilities. If list is non-null, then it is a reference to (not a copy of) the list that is already in the map.
If list is null, then we know the map does not contain the given key. In that case we create a new empty list, set the variable list as a reference to the newly created list, and then add the list to the map for the key.
In either case, when we reach
list.add(new Integer(columns[1]));
the variable list contains a reference to an ArrayList that is already in the map, either the one that was there before, or the one we just created and added. We just add the value to it.
I should add the first column is the value and the second column is the key.
You could replace the ArrayList declaration with a List declaration, but that is a minor point.
Anyway, this is not tested, but the logic should be something like:
while ((line = in.readLine()) != null) {
String columns[] = line.split("\t");
ArrayList<Integer> valueForCurrentLine = words.get(columns[1]);
// you instantiate and put the arrayList once
if (valueForCurrentLine==null){
valueForCurrentLine = new ArrayList<Integer>();
words.put(columns[1],valueForCurrentLine);
}
valueForCurrentLine.add(Integer.parseInt(columns[0])); // parse the string: the list stores Integers
}
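On Java 8 and above, the same get-or-create pattern can be collapsed with computeIfAbsent, sketched against the variables above:
// One-liner equivalent of the null-check-then-put dance
words.computeIfAbsent(columns[1], k -> new ArrayList<>())
     .add(Integer.parseInt(columns[0]));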
Upvote to Jim Garrison's answer above. Here's a little more... (Yes, you should check/mark his answer as the one that solved it)
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class WordNet {
private final Map<String, List<Integer>> words;
private final static String LEXICAL_UNITS_FILE = "src/net/bwillard/practice/code/wn_s.csv";
/**
*
* @throws FileNotFoundException
* @throws IOException
*/
public WordNet() throws FileNotFoundException, IOException {
words = new HashMap<>();
readLexicalUnitsFile();
}
/**
*
* @throws FileNotFoundException
* @throws IOException
*/
private void readLexicalUnitsFile() throws FileNotFoundException, IOException {
BufferedReader in = new BufferedReader(new FileReader(LEXICAL_UNITS_FILE));
String line;
while ((line = in.readLine()) != null) {
String columns[] = line.split("\t");
String key = columns[0];
int valueInt;
List<Integer> valueList;
try {
valueInt = Integer.parseInt(columns[1]);
} catch (NumberFormatException e) {
System.out.println(e);
continue;
}
if (words.containsKey(key)) {
valueList = words.get(key);
} else {
valueList = new ArrayList<>();
words.put(key, valueList);
}
valueList.add(valueInt);
}
in.close();
}
//You can test this file by running it as a standalone app....
public static void main(String[] args) {
try {
WordNet wn = new WordNet();
for (String k : wn.words.keySet()) {
System.out.println(k + " " + wn.words.get(k));
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
The code given below takes two different inputs, but I want to pass only a single input, the path of the folder "test", with the rest of the functionality unchanged.
Also, the final.tbl that is generated should be written to the same input folder path:
public class Migrator {
private static final String KEY1 = "post_tran_id";
private static final String KEY2 = "post_tran_cust_id";
void migrate(String post_tran, String post_tran_cust) throws IOException {
Map<String, Map<String, String>> h1 = loadFile(post_tran, KEY1);
Map<String, Map<String, String>> h2 = loadFile(post_tran_cust, KEY2);
PrintStream out = new PrintStream("final.tbl");
for (Map.Entry<String, Map<String, String>> entry : h1.entrySet()) {
Map<String, String> data = entry.getValue();
String k = data.get(KEY2);
if (k != null && h2.containsKey(k)) {
print(out, KEY1, data.get(KEY1));
print(out, KEY2, data.get(KEY2));
// Print remaining rows in any order
for (String key : data.keySet()) {
if ( ! key.equals(KEY1) && ! key.equals(KEY2) ) {
print(out, key, data.get(key));
}
}
data = h2.get(k);
for (String key : data.keySet()) {
if ( ! key.equals(KEY2) ) {
print(out, key, data.get(key));
}
}
out.println(); // Record separator
}
}
}
private void print(PrintStream out, String key, String data) {
out.print("[name]");
out.print(key);
out.print("[/name]");
out.print("=");
out.print("[data]");
out.print(data);
out.print("[/data]");
out.println();
}
private Map<String, Map<String, String>> loadFile(String fileName, String key) throws IOException {
Map<String, Map<String, String>> result = new HashMap<String, Map<String, String>>();
BufferedReader br = new BufferedReader(new FileReader(fileName));
String line;
do {
Map<String, String> data = new HashMap<String, String>();
while ((line = br.readLine()) != null && !line.isEmpty()) {
data.put(getKey(line), getData(line));
}
result.put(data.get(key), data);
} while (line != null);
br.close();
return result;
}
private String getKey(String line) {
String[] tokens = line.split("=");
int length = tokens[0].length();
return tokens[0].substring(6, length - 7);
}
private String getData(String line) {
String[] tokens = line.split("=");
int length = tokens[1].length();
return tokens[1].substring(6, length - 7);
}
public static void main(String[] args) throws IOException {
Migrator mg = new Migrator();
mg.migrate("D:\\test\\post_tran.tbl", "D:\\test\\post_tran_cust.tbl");
}
}
To make your migrate method take 1 argument but be able to work with many paths, you can always append all the paths into one string and parse them inside the migrate method.
Example:
String appendedArgument = "D:\\test\\post_tran.tbl;D:\\test\\post_tran_cust.tbl";
Notice the semi-colon separating both paths.
Then you can call your method:
mg.migrate(appendedArgument);
And parse it on the other side:
void migrate(String argument) throws IOException
{
String[] splitArgument = argument.split(";");
String post_tran = splitArgument[0];
String post_tran_cust = splitArgument[1];
Map<String, Map<String, String>> h1 = loadFile(post_tran, KEY1);
Map<String, Map<String, String>> h2 = loadFile(post_tran_cust, KEY2);
}
Using this kind of method you can send as many paths into your migrate method as you want, this enables you (in this particular case) to also send the path where you want to store the final.tbl file.
That would make the appendedArgument string to look like:
String appendedArgument = "D:\\test\\;D:\\test\\post_tran.tbl;D:\\test\\post_tran_cust.tbl";
And then you would need to parse it accordingly inside the migrate method.
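Alternatively, since the question asks for a single folder path, here is an untested sketch that derives all three file locations from that folder; the file names post_tran.tbl and post_tran_cust.tbl are taken from the question's main():
void migrate(String folderPath) throws IOException {
    File folder = new File(folderPath);
    // Input files are assumed to sit directly inside the given folder
    Map<String, Map<String, String>> h1 = loadFile(new File(folder, "post_tran.tbl").getPath(), KEY1);
    Map<String, Map<String, String>> h2 = loadFile(new File(folder, "post_tran_cust.tbl").getPath(), KEY2);
    // final.tbl is written into the same folder
    PrintStream out = new PrintStream(new File(folder, "final.tbl"));
    // ... the rest of the original migrate body stays unchanged
}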
I am writing a utility method for parsing CSV files. For some reason, this method throws a NullPointerException during insertion into a list of maps, and I am not sure why. Can someone run their eyes over this and explain why it could be happening? This is the line where the NullPointerException occurs:
record.put(header[i].toString(), nextLine[i].toString());
Here is the file to parse:
id;state;city;total_pop;avg_temp
1;Florida;Miami;120000;76
2;Michigan;Detroit;330000;54
3;New Jersey;Newark;190000;34
And the code:
public class FileParserUtil {
public List<Map<String, String>> parseFile(String fileName, char seperator)
throws IOException {
CSVReader reader = new CSVReader(new FileReader(fileName), seperator);
Map<String, String> record = null;
List<Map<String, String>> rows = null;
// int colcnt = reader.readNext().length;
String[] header = reader.readNext();
String[] nextLine;
while ((nextLine = reader.readNext()) != null) {
for (int i = 0; i< nextLine.length; i++){
System.out.println(header[0]);
System.out.println(nextLine[0]);
System.out.println(header[1]);
System.out.println(nextLine[1]);
System.out.println(nextLine.length);
record.put(header[i].toString(), nextLine[i].toString());
}
rows.add(record);
}
reader.close();
return rows;
}
}
Your array header probably does not have the same length as nextLine, so you cannot use the same index i to retrieve elements from both.
Edit:
I think you forgot to initialize Map<String, String> record. Is that it?
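A minimal sketch of that fix, assuming each CSV row should become its own map: initialize rows once, and create a fresh record per row:
List<Map<String, String>> rows = new ArrayList<>();
String[] header = reader.readNext();
String[] nextLine;
while ((nextLine = reader.readNext()) != null) {
    Map<String, String> record = new HashMap<>(); // fresh map per CSV row
    for (int i = 0; i < nextLine.length; i++) {
        record.put(header[i], nextLine[i]);
    }
    rows.add(record);
}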
I've been working on this code for quite some time and just want a simple heads up if I'm heading down a dead end. The point I'm at now is matching identical cells from different .csv files and copying one row into another csv file. The question really is: would it be possible to write at specific lines, so that, for example, if the two cells match at row 50, I write back onto row 50? I'm assuming I would extract everything into a HashMap, write it in there, then write back to the .csv file? Or is there an easier way?
For example, I have one csv that has person details, and the other has property details of where each person lives. I wish to copy the property details into the person csv, as well as match them up with the correct person details. Hope this makes sense.
public class Old {
public static void main(String [] args) throws IOException
{
List<String[]> cols;
List<String[]> cols1;
int row =0;
int count= 0;
boolean b;
CsvMapReader Reader = new CsvMapReader(new FileReader("file1.csv"), CsvPreference.EXCEL_PREFERENCE);
CsvMapReader Reader2 = new CsvMapReader(new FileReader("file2.csv"), CsvPreference.EXCEL_PREFERENCE);
try {
cols = readFile("file1.csv");
cols1 = readFile("fiel2.csv");
String [] headers = Reader.getCSVHeader(true);
headers = header(cols1, headers);
} catch (IOException e) {
e.printStackTrace();
return;
}
for (int j =1; j<cols.size();j++) //1
{
for (int i=1;i<cols1.size();i++){
if (cols.get(j)[0].equals(cols1.get(i)[0]))
{
}
}
}
}
private static List<String[]> readFile(String fileName) throws IOException
{
List<String[]> values = new ArrayList<String[]>();
Scanner s = new Scanner(new File(fileName));
while (s.hasNextLine()) {
String line = s.nextLine();
values.add(line.split(","));
}
return values;
}
public static void csvWriter (String fileName, String [] nameMapping ) throws FileNotFoundException
{
ICsvListWriter writer = new CsvListWriter(new PrintWriter(fileName),CsvPreference.STANDARD_PREFERENCE);
try {
writer.writeHeader(nameMapping);
} catch (IOException e) {
e.printStackTrace();
}
}
public static String[] header(List<String[]> cols1, String[] headers){
List<String> list = new ArrayList<String>();
String [] add;
int count= 0;
for (int i=0;i<headers.length;i++){
list.add(headers[i]);
}
boolean c;
c= true;
while(c) {
add = cols1.get(0);
list.add(add[count]);
if (cols1.get(0)[count].equals(null)) // this line is never reached, error
{
c=false;
break;
} else
count ++;
}
String[] array = new String[list.size()];
list.toArray(array);
return array;
}
Just be careful if you read all of the addresses and person details into memory first (as Thomas has suggested) - if you're only dealing with small CSV files then it's fine, but you may run out of memory if you're dealing with larger files.
As an alternative, I've put together an example that reads the addresses in first, then writes the combined person/address details while it reads in the person details.
Just a few things to note:
I've used CsvMapReader and CsvMapWriter because you were; this meant I had to use a Map containing a Map for storing the addresses. Using CsvBeanReader/CsvBeanWriter would make this a bit more elegant.
The code from your question doesn't actually use Super CSV to read the CSV (you're using Scanner and String.split()). You'll run into issues if your CSV contains commas in the data (which is quite possible with addresses), so it's a lot safer to use Super CSV, which will handle escaped commas for you.
Example:
package example;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;
import org.supercsv.io.CsvMapReader;
import org.supercsv.io.CsvMapWriter;
import org.supercsv.io.ICsvMapReader;
import org.supercsv.io.ICsvMapWriter;
import org.supercsv.prefs.CsvPreference;
public class CombiningPersonAndAddress {
private static final String PERSON_CSV = "id,firstName,lastName\n"
+ "1,philip,fry\n2,amy,wong\n3,hubert,farnsworth";
private static final String ADDRESS_CSV = "personId,address,country\n"
+ "1,address 1,USA\n2,address 2,UK\n3,address 3,AUS";
private static final String[] COMBINED_HEADER = new String[] { "id",
"firstName", "lastName", "address", "country" };
public static void main(String[] args) throws Exception {
ICsvMapReader personReader = null;
ICsvMapReader addressReader = null;
ICsvMapWriter combinedWriter = null;
final StringWriter output = new StringWriter();
try {
// set up the readers/writer
personReader = new CsvMapReader(new StringReader(PERSON_CSV),
CsvPreference.STANDARD_PREFERENCE);
addressReader = new CsvMapReader(new StringReader(ADDRESS_CSV),
CsvPreference.STANDARD_PREFERENCE);
combinedWriter = new CsvMapWriter(output,
CsvPreference.STANDARD_PREFERENCE);
// map of personId -> address (inner map is address details)
final Map<String, Map<String, String>> addresses =
new HashMap<String, Map<String, String>>();
// read in all of the addresses
Map<String, String> address;
final String[] addressHeader = addressReader.getCSVHeader(true);
while ((address = addressReader.read(addressHeader)) != null) {
final String personId = address.get("personId");
addresses.put(personId, address);
}
// write the header
combinedWriter.writeHeader(COMBINED_HEADER);
// read each person
Map<String, String> person;
final String[] personHeader = personReader.getCSVHeader(true);
while ((person = personReader.read(personHeader)) != null) {
// copy address details to person if they exist
final String personId = person.get("id");
final Map<String, String> personAddress = addresses.get(personId);
if (personAddress != null) {
person.putAll(personAddress);
}
// write the combined details
combinedWriter.write(person, COMBINED_HEADER);
}
} finally {
personReader.close();
addressReader.close();
combinedWriter.close();
}
// print the output
System.out.println(output);
}
}
Output:
id,firstName,lastName,address,country
1,philip,fry,address 1,USA
2,amy,wong,address 2,UK
3,hubert,farnsworth,address 3,AUS
From your comment, it seems like you have the following situation:
File 1 contains persons
File 2 contains addresses
You then want to match persons and addresses by some key (one or more fields) and write the combination back to a CSV file.
Thus the simplest approach might be something like this:
//use a LinkedHashMap to preserve the order of the persons as found in file 1
Map<PersonKey, String[]> persons = new LinkedHashMap<>();
//fill in the persons from file 1 here
Map<PersonKey, String[]> addresses = new HashMap<>();
//fill in the addresses from file 2 here
List<String[]> outputLines = new ArrayList<>(persons.size());
for( Map.Entry<PersonKey, String[]> personEntry: persons.entrySet() ) {
String[] person = personEntry.getValue();
String[] address = addresses.get( personEntry.getKey() );
//merge the two arrays and put them into outputLines
}
//write outputLines to a file
Note that PersonKey might just be a String or a wrapper object ( Integer etc.) if you can match persons and addresses by one field. If you have more fields you might need a custom PersonKey object with equals() and hashCode() properly overridden.
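For illustration, a hypothetical PersonKey matching on two fields might look like this (the field names are assumptions):
import java.util.Objects;

final class PersonKey {
    private final String firstName;
    private final String lastName;

    PersonKey(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PersonKey)) return false;
        PersonKey other = (PersonKey) o;
        return firstName.equals(other.firstName) && lastName.equals(other.lastName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(firstName, lastName); // must be consistent with equals()
    }
}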