Load data dynamically into HashMap - java

I am reading data from a CSV file and want to store it in a HashMap. There are 3 columns, all of them Strings. I am using this code:
listDataHeader = new ArrayList<String>();
listDataChild = new HashMap<String, List<String>>();
InputStream inputStream = getResources().openRawResource(R.raw.photographers);
CSVReader csv = new CSVReader(inputStream);
List<String[]> data = csv.read();
List<String> info = new ArrayList<String>();
for (String[] children : data) {
    info.add(children[1]);
    info.add(children[2]);
}
// fill data for the child
for (String[] line : data) {
    listDataHeader.add(line[0]);
    listDataChild.put(listDataHeader.get(0), info);
}
listDataChild should hold the info from the 2nd and 3rd columns of the CSV. Currently, it's loading that info more than once. I'd welcome any ideas, thank you!

You could do this with just one iteration.
for (String[] line : data) {
    List<String> info = new ArrayList<String>();
    info.add(line[1]);
    info.add(line[2]);
    listDataChild.put(line[0], info);
}
Here you iterate through the rows of your CSV once; for each row you create a new list, add the second and third columns to it, and put that list into the map keyed by the first column.
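If the first column can repeat across rows (an assumption, the question does not say), a small variant using computeIfAbsent (Java 8+) appends to the list already stored for that key instead of replacing it:

for (String[] line : data) {
    // reuse the list already stored for this key, or create a new one if absent
    List<String> info = listDataChild.computeIfAbsent(line[0], k -> new ArrayList<String>());
    info.add(line[1]);
    info.add(line[2]);
}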

Related

How to parse the csv to hashmap grouping by column using opencsv

I am trying to map the col1 values to the col2 values from a CSV file using CSVReader, but I am unable to find the logic to do so.
I want to do it by reading the CSV with CSVReader, looping over the data lines, and using an ArrayList and a HashMap with a key and a value (ArrayList). I don't want to hardcode it.
I got as far as the following but am unable to proceed further. Please help.
CSVReader csvReader = new CSVReader(new FileReader(fileName), ',', '"', 1);
Map<String, List<String>> tableandcols = new HashMap<String, List<String>>();
ArrayList<String> tablenames = new ArrayList<>();
ArrayList<String> colnames = new ArrayList<>();
String[] row;
while ((row = csvReader.readNext()) != null) {
    tablenames.add(row[0]);
    colnames.add(row[1]);
}
input data:
State,City,Country
NJ,Trenton,US
NJ,Newark,US
NC,Cary,US
NC,Charlotte,US
GA,Atlanta,US
I want the data to end up in the HashMap as follows:
[<NJ,[Trenton,Newark]>
<NC,[Cary,Charlotte]>
<GA,[Atlanta]>]
You can try the piece of code below:
try {
    CSVReader csvReader = new CSVReader(new FileReader(fileName), ',', '"', 1);
    Map<String, List<String>> tableandcols = new HashMap<String, List<String>>();
    String[] row;
    while ((row = csvReader.readNext()) != null) {
        // If the map already contains this state, add the city to its values list
        if (tableandcols.containsKey(row[0])) {
            tableandcols.get(row[0]).add(row[1]);
        }
        // If the map doesn't have this state as a key yet, insert a new key and value
        else {
            List<String> cities = new ArrayList<>();
            cities.add(row[1]);
            tableandcols.put(row[0], cities);
        }
    }
}
catch (Exception e) {
    // log the exception
}
Alternatively, you can use HeaderColumnNameTranslateMappingStrategy to map the column values to a Java bean, then loop through the list of beans and aggregate the cities by state.
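A minimal sketch of that bean-mapping approach, assuming opencsv 4.x and a hypothetical StateCity bean (neither is spelled out above); exception handling is omitted for brevity:

public class StateCity {
    private String state;
    private String city;
    public String getState() { return state; }
    public void setState(String state) { this.state = state; }
    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
}

// map the CSV headers ("State", "City") to the bean properties
HeaderColumnNameTranslateMappingStrategy<StateCity> strategy =
        new HeaderColumnNameTranslateMappingStrategy<>();
strategy.setType(StateCity.class);
Map<String, String> columnMapping = new HashMap<>();
columnMapping.put("State", "state");
columnMapping.put("City", "city");
strategy.setColumnMapping(columnMapping);

List<StateCity> beans = new CsvToBeanBuilder<StateCity>(new FileReader(fileName))
        .withMappingStrategy(strategy)
        .build()
        .parse();

// aggregate cities per state
Map<String, List<String>> tableandcols = new HashMap<>();
for (StateCity bean : beans) {
    tableandcols.computeIfAbsent(bean.getState(), k -> new ArrayList<>()).add(bean.getCity());
}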
You can also do this with the Java 8 stream approach: use readAll to read the complete file into a List<String[]>. From its documentation:
Reads the entire file into a List with each element being a String[] of tokens. Since the current implementation returns a LinkedList, you are strongly discouraged from using index-based access methods to get at items in the list. Instead, iterate over the list.
If you want to skip the first row with the headers, use skip(1), and then use Collectors.groupingBy to group the elements by State:
List<String[]> arr = csvReader.readAll();
Map<String, List<String>> res = arr.stream().skip(1)
        .collect(Collectors.groupingBy(str -> str[0], Collectors.mapping(str -> str[1], Collectors.toList())));
Or use a simple for loop with Map.compute:
List<String[]> arr = csvReader.readAll();
Map<String, List<String>> tableandcols = new HashMap<String, List<String>>();
for (String[] array : arr) {
    tableandcols.compute(array[0], (key, val) -> val == null ? new ArrayList<>() : val).add(array[1]);
}

How can a List of INDArrays be stored in a file

I am working on a reinforcement-learning project and have a List<INDArray> which holds a list of states of the world, and a second List<INDArray> which holds action predictions and reward values, with the index corresponding to the states in the first list.
I want to store this data on the hard drive for later training. How can I achieve this?
Let's say, for example, we have:
List<INDArray> stateList = new ArrayList<>();
stateList.add(Nd4j.valueArrayOf(new int[]{3,3,3}, 5));
stateList.add(Nd4j.valueArrayOf(new int[]{3,3,3}, 6));
List<INDArray> valueList = new ArrayList<>();
valueList.add(Nd4j.create(new float[]{1, 2}));
valueList.add(Nd4j.create(new float[]{3, 4}));
You have to prepare the file content and then simply write it into a file.
String fileContent = "";
for (INDArray arr : valueList) {
    fileContent += arr.toString() + "\n"; // or whichever representation you want to store
}
FileWriter fileWriter = new FileWriter("c:/temp/samplefile.txt");
fileWriter.write(fileContent);
fileWriter.close();
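If a plain text dump is too lossy, a hedged alternative is binary serialization, assuming your ND4J version exposes Nd4j.write(INDArray, DataOutputStream) and Nd4j.read(DataInputStream): write a count first, then each array, so the list can be read back. The same can be done for valueList in a second file so the indices stay aligned.

// write the list
try (DataOutputStream dos = new DataOutputStream(new FileOutputStream("states.bin"))) {
    dos.writeInt(stateList.size());   // number of arrays, so the reader knows when to stop
    for (INDArray arr : stateList) {
        Nd4j.write(arr, dos);         // writes shape and data of one INDArray
    }
}

// read it back
List<INDArray> restored = new ArrayList<>();
try (DataInputStream dis = new DataInputStream(new FileInputStream("states.bin"))) {
    int count = dis.readInt();
    for (int i = 0; i < count; i++) {
        restored.add(Nd4j.read(dis));
    }
}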

How to read a particular row in CSV file

I have an ArrayList that I want to verify against the rows of a CSV file.
The ArrayList has different elements every time, as I create it dynamically and add element text from my web page. I want to access the CSV data horizontally, i.e. each time verify a row's data against my ArrayList. Please give me a solution for this.
You can try doing something like this:
int index = 0; // the row number you want to verify
try (BufferedReader br = new BufferedReader(new FileReader(path))) {
    Stream<String> lines = br.lines();
    String[] linesArray = lines.toArray(String[]::new);
    if (index < linesArray.length) {
        String line = linesArray[index];
        // then you can do your verification
    }
}
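The verification itself is not shown above; assuming the CSV row is comma-separated and expected is the ArrayList built from the web page (both assumptions), a possible follow-up looks like this:

static boolean rowMatches(String csvLine, List<String> expected) {
    // order-sensitive, element-by-element comparison of the CSV row against the list
    List<String> rowValues = Arrays.asList(csvLine.split(","));
    return rowValues.equals(expected);
}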

How to add List into properties file?

I am converting a properties file into XML format like below.
public class XmlPropertiesWriter {
    public static void main(String args[]) throws FileNotFoundException, IOException {
        // Writing a properties file in XML format from Java
        Properties props = new Properties();
        FileOutputStream fos = new FileOutputStream("C:\\Users\\Desktop\\myxml.xml");
        props.setProperty("key1", "test");
        props.setProperty("key2", "test1");
        // write the properties out in XML format
        props.storeToXML(fos, "Properties file in xml format generated from Java program");
        fos.close();
    }
}
This is working fine, but I also want to add an ArrayList to this XML file. How can I do this? Can anyone help me?
You can serialize the list to and from a string representation to store the data in the properties file:
ArrayList<String> list = new ArrayList<>();
String serialized = list.stream().collect(Collectors.joining(","));

String input = "data,data";
List<String> unserialized = Arrays.asList(input.split(","));
With this method, take care to use a separator that never occurs in your data.
Otherwise, write an XML (or JSON) file reader/writer that does what you want, with support for list elements.
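Tying this back to the original storeToXML example, a minimal sketch (assuming the separator "," never occurs inside the list elements) stores the joined list as just another property:

ArrayList<String> list = new ArrayList<>(Arrays.asList("test", "test1", "test2"));
Properties props = new Properties();
props.setProperty("myList", String.join(",", list)); // serialize the list as a single value
try (FileOutputStream fos = new FileOutputStream("C:\\Users\\Desktop\\myxml.xml")) {
    props.storeToXML(fos, "Properties file with a serialized list");
}

// after loading the properties back:
// List<String> restored = Arrays.asList(props.getProperty("myList").split(","));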
It depends on the element type of the ArrayList. If it's a String type, you can do:
arrayList.toArray(new String[arrayList.size()]);
If the type is an object, you can create a StringBuilder and add all the values separated by a ; or : so you can split them when needed:
final StringBuilder builder = new StringBuilder();
final List<Point> list = new ArrayList<Point>();
list.add(new Point(0, 0));
list.add(new Point(1, 0));
for (final Point p : list) {
    builder.append(p.x).append(",").append(p.y).append(";"); // e.g. "0,0;1,0;"
}
properties.setProperty("list", builder.toString());
When you load the properties, you can then simply do:
final List<Point> list = new ArrayList<Point>();
final String[] points = properties.getProperty("list").split(";");
for (final String p : points) {
    final String[] xy = p.split(",");
    final int x = Integer.parseInt(xy[0]);
    final int y = Integer.parseInt(xy[1]);
    list.add(new Point(x, y));
}

How to rename Columns via Lambda function - fasterXML

I'm using the FasterXML library to parse my CSV file. The CSV file has the column names in its first line. Unfortunately, I need the columns to be renamed. I have a lambda function for this, where I can pass in the value read from the CSV file and get the new name back.
My code looks like this, but it does not work:
CsvSchema csvSchema = CsvSchema.emptySchema().withHeader();
ArrayList<HashMap<String, String>> result = new ArrayList<HashMap<String, String>>();
MappingIterator<HashMap<String, String>> it = new CsvMapper().reader(HashMap.class)
        .with(csvSchema)
        .readValues(new File(fileName));
while (it.hasNext())
    result.add(it.next());

System.out.println("changing the schema columns.");
for (int i = 0; i < csvSchema.size(); i++) {
    String name = csvSchema.column(i).getName();
    String newName = getNewName(name);
    csvSchema.builder().renameColumn(i, newName);
}
csvSchema.rebuild();
When I try to print out the columns later, they are still the same as in the top line of my CSV file.
Additionally, I noticed that csvSchema.size() equals 0. Why is that?
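csvSchema.size() is 0 because CsvSchema.emptySchema().withHeader() carries no column definitions of its own; the columns are only discovered while parsing, and CsvSchema instances are immutable, so rebuild() only returns a builder whose result is discarded here. A workaround that avoids the schema API altogether (a sketch, not an official FasterXML mechanism) is to rename the keys of the parsed rows using your getNewName function:

// hedged sketch: rename the keys of each parsed row instead of mutating the schema
ArrayList<HashMap<String, String>> renamed = new ArrayList<>();
for (HashMap<String, String> row : result) {
    HashMap<String, String> copy = new HashMap<>();
    for (Map.Entry<String, String> e : row.entrySet()) {
        copy.put(getNewName(e.getKey()), e.getValue()); // getNewName is your renaming function
    }
    renamed.add(copy);
}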
You could instead use uniVocity-parsers for that. The following solution streams the input rows to the output, so you don't need to load everything into memory and then write the data back out with new headers. It will be much faster:
public static void main(String... args) throws Exception {
    Writer output = new StringWriter(); // use a FileWriter for your case

    CsvWriterSettings writerSettings = new CsvWriterSettings(); // many options here - check the documentation
    final CsvWriter writer = new CsvWriter(output, writerSettings);

    CsvParserSettings parserSettings = new CsvParserSettings(); // many options here as well
    parserSettings.setHeaderExtractionEnabled(true); // indicates the first row of the input are headers
    parserSettings.setRowProcessor(new AbstractRowProcessor() {
        public void processStarted(ParsingContext context) {
            writer.writeHeaders("Column A", "Column B", "... etc");
        }

        public void rowProcessed(String[] row, ParsingContext context) {
            writer.writeRow(row);
        }

        public void processEnded(ParsingContext context) {
            writer.close();
        }
    });

    CsvParser parser = new CsvParser(parserSettings);

    Reader reader = new StringReader("A,B,C\n1,2,3\n4,5,6"); // use a FileReader for your case
    parser.parse(reader); // all rows are parsed and submitted to the RowProcessor implementation of the parserSettings

    System.out.println(output.toString());
    // nothing else to do. All resources are closed automatically in case of errors.
}
You can easily select the columns by using parserSettings.selectFields("B", "A") in case you want to reorder/eliminate columns.
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
