Creating large .xls with SXSSF - java

I've got a problem exporting a large .xls file with SXSSF; by large I mean 27 columns x 100,000 rows. The Excel file is returned on an endpoint request. I've limited the number of rows - it could be up to 3x larger.
I'm using a template engine for inserting the data.
Original code
public StreamingOutput createStreamedExcelReport(Map<String, Object> params, String templateName, String[] columnsToHide) throws Exception {
    try(InputStream is = ReportGenerator.class.getResourceAsStream(templateName)) {
        assert is != null;
        final Transformer transformer = PoiTransformer.createTransformer(is);
        AreaBuilder areaBuilder = new XlsCommentAreaBuilder(transformer);
        List<Area> xlsAreaList = areaBuilder.build();
        Area xlsArea = xlsAreaList.get(0);
        Context context = new PoiContext();
        for(Map.Entry<String, Object> entry : params.entrySet()) {
            context.putVar(entry.getKey(), entry.getValue());
        }
        xlsArea.applyAt(new CellRef("Sheet1!A1"), context);
        xlsArea.processFormulas();
        return new StreamingOutput() {
            @Override
            public void write(OutputStream out) throws IOException {
                ((PoiTransformer) transformer).getWorkbook().write(out);
            }
        };
    }
}
SXSSF
public StreamingOutput createStreamedExcelReport(Map<String, Object> params, String templateName, String[] columnsToHide) throws Exception {
    try(InputStream is = ReportGenerator.class.getResourceAsStream(templateName)) {
        assert is != null;
        Workbook workbook = WorkbookFactory.create(is);
        final PoiTransformer transformer = PoiTransformer.createSxssfTransformer(workbook);
        AreaBuilder areaBuilder = new XlsCommentAreaBuilder(transformer);
        List<Area> xlsAreaList = areaBuilder.build();
        Area xlsArea = xlsAreaList.get(0);
        Context context = new PoiContext();
        for(Map.Entry<String, Object> entry : params.entrySet()) {
            context.putVar(entry.getKey(), entry.getValue());
        }
        xlsArea.applyAt(new CellRef("Sheet1!A1"), context);
        xlsArea.processFormulas();
        return new StreamingOutput() {
            @Override
            public void write(OutputStream out) throws IOException {
                transformer.getWorkbook().write(out);
            }
        };
    }
}
The export was running for 7 minutes before I stopped the server - it was too long. An acceptable time would be about 1 minute (2 minutes max). For most of that time CPU usage was around 60-80% and memory usage was constant. I also tried exporting 40 rows - that took about 10 seconds.
Maybe my function needs to be optimized.
An additional problem is that I'm inserting formulas. In the original code the formulas are replaced with values; in the SXSSF version they are not.

I recommend disabling formula processing at this point, because formula support in the SXSSF version is limited and the memory consumption can be too high. Formula support may be improved in future JXLS releases.
So just remove the xlsArea.processFormulas() call and add
context.getConfig().setIsFormulaProcessingRequired(false);
to disable tracking of cell references (as shown in the Jxls documentation) and see if it works.
Also please note that the template and the final report are expected to be in .xlsx format if you use SXSSF.
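For illustration, here is a minimal sketch of the SXSSF variant with formula processing disabled (it reuses the API calls from the code above; the config call is the one quoted from the Jxls documentation, so treat this as a sketch rather than a verified fix):
Workbook workbook = WorkbookFactory.create(is);
final PoiTransformer transformer = PoiTransformer.createSxssfTransformer(workbook);
AreaBuilder areaBuilder = new XlsCommentAreaBuilder(transformer);
Area xlsArea = areaBuilder.build().get(0);
Context context = new PoiContext();
// disable formula processing and cell-reference tracking (SXSSF formula support is limited)
context.getConfig().setIsFormulaProcessingRequired(false);
for (Map.Entry<String, Object> entry : params.entrySet()) {
    context.putVar(entry.getKey(), entry.getValue());
}
xlsArea.applyAt(new CellRef("Sheet1!A1"), context);
// note: no xlsArea.processFormulas() call here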

Related

Writing a large amount of data to Excel: GC overhead limit exceeded

I have a list of strings read from MongoDB (~200k lines).
Then I want to write it to an Excel file with Java code:
public class OutputToExcelUtils {
    private static XSSFWorkbook workbook;
    private static final String DATA_SEPARATOR = "!";

    public static void clusterOutToExcel(List<String> data, String outputPath) {
        workbook = new XSSFWorkbook();
        FileOutputStream outputStream = null;
        writeData(data, "Data");
        try {
            outputStream = new FileOutputStream(outputPath);
            workbook.write(outputStream);
            workbook.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void writeData(List<String> data, String sheetName) {
        int rowNum = 0;
        XSSFSheet sheet = workbook.getSheet(sheetName);
        sheet = workbook.createSheet(sheetName);
        for (int i = 0; i < data.size(); i++) {
            System.out.println(sheetName + " Processing line: " + i);
            int colNum = 0;
            // Split into cell values
            String[] valuesOfLine = data.get(i).split(DATA_SEPARATOR);
            Row row = sheet.createRow(rowNum++);
            for (String valueOfCell : valuesOfLine) {
                Cell cell = row.createCell(colNum++);
                cell.setCellValue(valueOfCell);
            }
        }
    }
}
Then I get an error:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.apache.xmlbeans.impl.store.Cur$Locations.<init>(Cur.java:497)
    at org.apache.xmlbeans.impl.store.Locale.<init>(Locale.java:168)
    at org.apache.xmlbeans.impl.store.Locale.getLocale(Locale.java:242)
    at org.apache.xmlbeans.impl.store.Locale.newInstance(Locale.java:593)
    at org.apache.xmlbeans.impl.schema.SchemaTypeLoaderBase.newInstance(SchemaTypeLoaderBase.java:198)
    at org.apache.poi.POIXMLTypeLoader.newInstance(POIXMLTypeLoader.java:132)
    at org.openxmlformats.schemas.spreadsheetml.x2006.main.CTRst$Factory.newInstance(Unknown Source)
    at org.apache.poi.xssf.usermodel.XSSFRichTextString.<init>(XSSFRichTextString.java:87)
    at org.apache.poi.xssf.usermodel.XSSFCell.setCellValue(XSSFCell.java:417)
    at ups.mongo.excelutil.OutputToExcelUtils.writeData(OutputToExcelUtils.java:80)
    at ups.mongo.excelutil.OutputToExcelUtils.clusterOutToExcel(OutputToExcelUtils.java:30)
    at ups.mongodb.App.main(App.java:74)
Please give me some advice on this. Thank you with my respect.
Update - solution: use SXSSFWorkbook instead of XSSFWorkbook:
public class OutputToExcelUtils {
    private static SXSSFWorkbook workbook;
    private static final String DATA_SEPARATOR = "!";

    public static void clusterOutToExcel(ClusterOutput clusterObject, ClusterOutputTrade clusterOutputTrade,
            ClusterOutputDistance clusterOutputDistance, String outputPath) {
        workbook = new SXSSFWorkbook();
        workbook.setCompressTempFiles(true);
        FileOutputStream outputStream = null;
        writeData(clusterOutputTrade.getTrades(), "Data");
        try {
            outputStream = new FileOutputStream(outputPath);
            workbook.write(outputStream);
            workbook.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void writeData(List<String> data, String sheetName) {
        int rowNum = 0;
        SXSSFSheet sheet = workbook.createSheet(sheetName);
        sheet.setRandomAccessWindowSize(100); // keep 100 rows in memory; older rows are flushed to disk
        for (int i = 0; i < data.size(); i++) {
            System.out.println(sheetName + " Processing line: " + i);
            int colNum = 0;
            // Split into cell values
            String[] valuesOfLine = data.get(i).split(DATA_SEPARATOR);
            Row row = sheet.createRow(rowNum++);
            for (String valueOfCell : valuesOfLine) {
                Cell cell = row.createCell(colNum++);
                cell.setCellValue(valueOfCell);
            }
        }
    }
}
Your application is spending too much time doing garbage collection. This doesn't necessarily mean that it is running out of heap space; however, it spends too much time in GC relative to performing actual work, so the Java runtime shuts it down.
Try to enable throughput collection with the following JVM option:
-XX:+UseParallelGC
While you're at it, give your application as much heap space as possible:
-Xms????m
(where ???? stands for the amount of heap space in MB, e.g. -Xms8192m)
If this doesn't help, try to set a more lenient throughput goal with this option:
-XX:GCTimeRatio=19
This specifies that your application should do 19 times more useful work than GC-related work, i.e. it allows the GC to consume up to 5% of the processor time (I believe the stricter 1% default goal may be causing the above runtime error).
No guarantee that this will work. Can you check and post back so others who experience similar problems may benefit?
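Put together, a launch command might look like the following (a hypothetical example combining the options above; the heap size and jar name are placeholders to adjust for your application):
java -Xms8192m -Xmx8192m -XX:+UseParallelGC -XX:GCTimeRatio=19 -jar your-app.jar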
EDIT
Your root problem remains the fact that you need to hold the entire spreadsheet and all its related objects in memory while you are building it. Another solution would be to serialize the data, i.e. writing the actual spreadsheet file instead of constructing it in memory and saving it at the end. However, this requires reading up on the XLSX format and creating a custom solution.
Another option would be looking for a less memory-intensive library (if one exists). Possible alternatives to POI are JExcelAPI (open source) and Aspose.Cells (commercial).
I've used JExcelAPI years ago and had a positive experience (however, it appears that it is much less actively maintained than POI, so may no longer be the best choice).
EDIT 2
Looks like POI offers a streaming model (https://poi.apache.org/spreadsheet/how-to.html#sxssf), so this may be the best overall approach.
Well, try not to load all the data into memory. Even if the binary representation of 200k lines is not that big, the hydrated objects in memory may be too big. Just as a hint: if you have a POJO, each attribute in this POJO is a reference, and each reference takes 4 or 8 bytes depending on whether compressed oops are in use. This means that if your data is a POJO with 4 attributes, the references alone cost 200,000 * 4 * 4 bytes (or 8 bytes each), i.e. roughly 3 MB (or 6 MB), before counting object headers and the string contents themselves.
Theoretically you can increase the amount of memory for the JVM, but this is not a good solution, or more precisely it is not a good solution for a live system. For a non-interactive system it might be fine.
Hint: use the -Xmx and -Xms JVM arguments to control the heap size.
Instead of getting the entire list from the data, iterate line-wise.
If that is too cumbersome, write the list to a file and re-read it line-wise, for instance as a Stream<String>:
Path path = Files.createTempFile(...);
Files.write(path, list, StandardCharsets.UTF_8);
Files.lines(path, StandardCharsets.UTF_8)
    .forEach(line -> { ... });
On the Excel side: although xlsx uses shared strings, in case XSSF handles them carelessly, the following would use a single String instance for repeated string values.
public class StringCache {
    private static final int MAX_LENGTH = 40;
    private Map<String, String> identityMap = new HashMap<>();

    public String cached(String s) {
        if (s == null) {
            return null;
        }
        if (s.length() > MAX_LENGTH) {
            return s;
        }
        String t = identityMap.get(s);
        if (t == null) {
            t = s;
            identityMap.put(t, t);
        }
        return t;
    }
}

StringCache strings = new StringCache();
for (String valueOfCell : valuesOfLine) {
    Cell cell = row.createCell(colNum++);
    cell.setCellValue(strings.cached(valueOfCell));
}

Apache POI XSSFWorkbook memory leak

So I'm making a large-scale prime number generator in Java (with the help of JavaFX).
It uses the Apache POI library (I believe I'm using v3.17) to output the results to Excel spreadsheets.
The static methods for this exporting logic are held in a class called ExcelWriter. Basically, it iterates through an ArrayList argument and populates an XSSFWorkbook with its contents. Afterwards, a FileOutputStream is used to actually make it an Excel file. Here are the relevant parts of it:
public class ExcelWriter {
    //Configured JFileChooser to make alert before overwriting old files
    private static JFileChooser fileManager = new JFileChooser(){
        @Override
        public void approveSelection(){
            ...
        }
    };

    private static FileFilter filter = new FileNameExtensionFilter("Excel files","xlsx");
    private static boolean hasBeenInitialized = false;

    //Only method that can be called externally to access this class's functionality
    public static <T extends Object> void makeSpreadsheet
            (ArrayList<T> list, spreadsheetTypes type, int max, String title, JFXProgressBar progressBar)
            throws IOException, InterruptedException{
        progressBar.progressProperty().setValue(0);
        switch (type){
            case rightToLeftColumnLimit:
                makeSpreadsheetRightToLeft(list, false, max, title, progressBar);
                break;
            ...
        }
    }

    static private <T extends Object> void makeSpreadsheetRightToLeft
            (ArrayList<T> list, boolean maxRows, int max, String title, JFXProgressBar progressBar)
            throws IOException, InterruptedException{
        initializeChooser();
        XSSFWorkbook workbook = new XSSFWorkbook();
        XSSFSheet sheet = workbook.createSheet("Primus output");
        int rowPointer = 0;
        int columnPointer = 0;
        double progressIncrementValue = 1/(double)list.size();

        //Giving the spreadsheet an internal title also
        Row row = sheet.createRow(0);
        row.createCell(0).setCellValue(title);
        row = sheet.createRow(++rowPointer);

        //Making the sheet with a max column limit
        if (!maxRows){
            for (T number: list){
                if (columnPointer == max){
                    columnPointer = 0;
                    row = sheet.createRow(++rowPointer);
                }
                Cell cell = row.createCell(columnPointer++);
                progressBar.setProgress(progressBar.getProgress() + progressIncrementValue);
                cell.setCellValue(number.toString());
            }
        }else {
            //Making the sheet with a max row limit
            int columnWrapIndex = (int)Math.ceil(list.size()/(float)max);
            for (T number: list){
                if (columnPointer == columnWrapIndex){
                    columnPointer = 0;
                    row = sheet.createRow(++rowPointer);
                }
                Cell cell = row.createCell(columnPointer++);
                progressBar.setProgress(progressBar.getProgress() + progressIncrementValue);
                cell.setCellValue(number.toString());
            }
        }
        writeToExcel(workbook, progressBar);
    }

    static private void writeToExcel(XSSFWorkbook book, JFXProgressBar progressBar) throws IOException, InterruptedException{
        //Exporting to Excel
        int returnValue = fileManager.showSaveDialog(null);
        if (returnValue == JFileChooser.APPROVE_OPTION){
            File file = fileManager.getSelectedFile();
            //Validation logic here
            try{
                FileOutputStream out = new FileOutputStream(file);
                book.write(out);
                out.close();
                book.close();
            }catch (FileNotFoundException ex){
            }
        }
    }
}
Afterwards, my FXML document controller has a button listener which calls:
longCalculationThread thread = new longCalculationThread(threadBundle);
thread.start();
The longCalculationThread creates a list of about a million prime numbers and exports them via the ExcelWriter using this code:
private void publishResults() throws IOException, InterruptedException{
    if (!longResults.isEmpty()){
        if (shouldExport) {
            progressText.setText("Exporting to Excel...");
            ExcelWriter.makeSpreadsheet(longResults, exportType, excelExportLimit, getTitle(), progressBar);
        }
    }
}
The problem is, even though the variable holding the workbook (the XSSFWorkbook) is local to the methods it is used in, it doesn't get garbage collected afterwards.
It takes up like 1.5GB of RAM (I don't know why), and that data is only reallocated when another huge export is called (not for small exports).
My problem isn't really that the thing takes a lot of RAM, it's that even when the methods are completed the memory isn't GCed.
Here are some pictures from my NetBeans profiler:
Normal memory usage when making an array of 1,000,000 primes
Huge heap usage when making the workbook
Memory isn't reallocated when the workbook isn't accessible anymore
Fluctuation seen when making a new workbook using the same static methods
I found the answer! I had to prompt the GC with System.gc(). I remember trying this out earlier; however, I must have put it in a place where the workbook was still accessible and hence couldn't be GCed.
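For illustration, a minimal sketch of where such a call could go - after the export method has returned, so the workbook that was local to it is no longer reachable (method and variable names are taken from the code above; note that System.gc() is only a hint to the JVM, not a guarantee):
ExcelWriter.makeSpreadsheet(longResults, exportType, excelExportLimit, getTitle(), progressBar);
// the XSSFWorkbook was local to makeSpreadsheet and is unreachable at this point,
// so a GC hint has a chance of actually reclaiming it
System.gc();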

StackOverflowError while using distinct in Apache Spark

I use Spark 2.0.1.
I am trying to find distinct values in a JavaRDD as below
JavaRDD<String> distinct_installedApp_Ids = filteredInstalledApp_Ids.distinct();
I see that this line is throwing the below exception
Exception in thread "main" java.lang.StackOverflowError
at org.apache.spark.rdd.RDD.checkpointRDD(RDD.scala:226)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:84)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
..........
The same stack trace is repeated again and again.
The input filteredInstalledApp_Ids is large, with millions of records. Could the issue be the number of records, or is there a more efficient way to find distinct values in a JavaRDD? Any help would be much appreciated. Thanks in advance. Cheers.
Edit 1:
Adding the filter method
JavaRDD<String> filteredInstalledApp_Ids = installedApp_Ids
        .filter(new Function<String, Boolean>() {
            @Override
            public Boolean call(String v1) throws Exception {
                return v1 != null;
            }
        }).cache();
Edit 2:
Added the method used to generate installedApp_Ids
public JavaRDD<String> getIdsWithInstalledApps(String inputPath, JavaSparkContext sc,
        JavaRDD<String> installedApp_Ids) {
    JavaRDD<String> appIdsRDD = sc.textFile(inputPath);
    try {
        JavaRDD<String> appIdsRDD1 = appIdsRDD.map(new Function<String, String>() {
            @Override
            public String call(String t) throws Exception {
                String delimiter = "\t";
                String[] id_Type = t.split(delimiter);
                StringBuilder temp = new StringBuilder(id_Type[1]);
                if ((temp.indexOf("\"")) != -1) {
                    String escaped = temp.toString().replace("\\", "");
                    escaped = escaped.replace("\"{", "{");
                    escaped = escaped.replace("}\"", "}");
                    temp = new StringBuilder(escaped);
                }
                // To remove the empty character at the beginning of the string
                JSONObject wholeventObj = new JSONObject(temp.toString());
                JSONObject eventJsonObj = wholeventObj.getJSONObject("eventData");
                int appType = eventJsonObj.getInt("appType");
                if (appType == 1) {
                    try {
                        return (String.valueOf(appType));
                    } catch (JSONException e) {
                        return null;
                    }
                }
                return null;
            }
        }).cache();
        if (installedApp_Ids != null)
            return sc.union(installedApp_Ids, appIdsRDD1);
        else
            return appIdsRDD1;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
I assume the main dataset is in inputPath. It appears to be a delimiter-separated file (tab-delimited, judging by the split) with JSON-encoded values.
I think you could make your code a bit simpler by combining Spark SQL's DataFrames and the from_json function. I'm using Scala and leave converting the code to Java as a home exercise :)
The lines where you load the inputPath text file and the line parsing itself can be as simple as the following:
import org.apache.spark.sql.SparkSession
val spark: SparkSession = ...
val dataset = spark.read.csv(inputPath)
You can display the content using the show operator.
dataset.show(truncate = false)
You should see the JSON-encoded lines.
It appears that the JSON lines contain eventData and appType fields.
val jsons = dataset.withColumn("asJson", from_json(...))
See the functions object for reference.
With JSON lines, you can select the fields of your interest:
val apptypes = jsons.select("eventData.appType")
And then union it with installedApp_Ids.
I'm sure the code gets easier to read (and hopefully to write too). The migration will give you extra optimizations that you may or may not be able to write yourself using the assembler-like RDD API.
And best of all, filtering out nulls is as simple as using the na operator, which gives you DataFrameNaFunctions like drop. I'm sure you'll like them.
It does not necessarily answer your initial question, but this java.lang.StackOverflowError might go away just by doing the code migration, and the code gets easier to maintain, too.
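For reference, a rough Java equivalent of the Scala sketch above could look like this (an untested sketch assuming Spark 2.1+, where functions.from_json is available, a tab-delimited input as in the original split, and the JSON payload sitting in the second column):
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.from_json;

SparkSession spark = SparkSession.builder().getOrCreate();

// read the tab-delimited file; columns are named _c0, _c1, ... by default
Dataset<Row> dataset = spark.read().option("sep", "\t").csv(inputPath);

// schema for the JSON payload: { "eventData": { "appType": <int> } }
StructType schema = new StructType()
        .add("eventData", new StructType().add("appType", DataTypes.IntegerType));

// parse the JSON column and select the field of interest
Dataset<Row> jsons = dataset.withColumn("asJson", from_json(col("_c1"), schema));
Dataset<Row> appTypes = jsons.select(col("asJson.eventData.appType"));

// drop rows where the JSON could not be parsed or appType is missing
Dataset<Row> nonNull = appTypes.na().drop();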

Maps and Jxls - process different excel sheets separately with XLSTransformer

I have an Excel template with two sheets that I want to populate through XLSTransformer. The data are different for the two sheets (lists of different lengths, with one taking the results of a table - see code below), meaning that I cannot pass them through one map.
I've tried with two maps:
//for the first excel sheet
Map<String, List<ListData>> beanParams = new HashMap<String, List<ListData>>();
beanParams.put("rs", rs);
//for the second one
Map<String, List<DetailResult>> beanParams2 = new HashMap<String, List<DetailResult>>();
beanParams2.put("detRes", detRes);
XLSTransformer former = new XLSTransformer();
former.transformXLS(srcFilePath, beanParams, destFilePath);
former.transformXLS(srcFilePath, beanParams2, destFilePath);
The lists look like this:
List<Results> rs = new ArrayList<Results>();
Results s1 = new Results(compteurFemme, compteurHomme, compteurTot, averageTempsFemme, averageTempsHomme, averageTempsTot);
rs.add(s1);
List<ResultsDetails> detRes = new ArrayList<ResultsDetails>();
for(int i=0; i<tableau.getRowCount(); i++){
    ResultsDetails newRes = new ResultsDetails(item[i], rep[i], justefaux[i], tempsrep[i]);
    item[i]=((DataIdGenre) tableau.getModel()).getValueAt(i, 2).toString();
    rep[i]=((DataIdGenre) tableau.getModel()).getValueAt(i, 3).toString();
    justefaux[i]=((DataIdGenre) tableau.getModel()).getValueAt(i, 4).toString();
    tempsrep[i]=((DataIdGenre) tableau.getModel()).getValueAt(i, 5).toString();
    detRes.add(newRes);
}
Individually, the two exports work on their respective sheets, but together, the second erases the first.
I then tried to use some kind of multimap, with one key (the one I put in the Excel template) for two values:
Map<String, List<Object>> hm = new HashMap<String, List<Object>>();
List<Object> values = new ArrayList<Object>();
values.add(rs);
values.add(detRes);
hm.put("det", values);
XLSTransformer former = new XLSTransformer();
former.transformXLS(srcFilePath, hm, destFilePath);
But I got an error telling me that the data was inaccessible.
So my question is: is there a way to deal directly with different sheets when using XLSTransformer?
OK, I've come up with something, using a temporary file:
private void exportDataDet(File file) throws ParseException, IOException, ParsePropertyException, InvalidFormatException {
    List<ResultsDetails> detRes = generateResultsDetails();
    try(InputStream is = IdGenre.class.getResourceAsStream("/xlsTemplates/IdGenre/IdGenre_20-29_+12.xlsx")) {
        try (OutputStream os = new FileOutputStream("d:/IdGenreYOLO.xlsx")) {
            Context context = new Context();
            context.putVar("detRes", detRes);
            JxlsHelper.getInstance().processTemplate(is, os, context);
        }
    }
}

private void exportData(File file) throws ParseException, IOException, ParsePropertyException, InvalidFormatException {
    List<Results> rs = generateResults();
    try{
        String srcFilePath = "d:/IdGenreYOLO.xlsx";
        String destFilePath = "d:/IdGenreRes.xlsx";
        Map<String, List<Results>> beanParams = new HashMap<String, List<Results>>();
        beanParams.put("rs", rs);
        XLSTransformer former = new XLSTransformer();
        former.transformXLS(srcFilePath, beanParams, destFilePath);
    }
    finally{
        Path path = Paths.get("d:/IdGenreYOLO.xlsx");
        try{
            Files.delete(path);
        }
        finally{}
    }
}
It's probably not the best solution, all the more so because I have to add other data that could not fit in the two existing lists, and - at least for now - that will be done through a second temp file.
I haven't used the same method twice, because XLSTransformer was the only one I was able to get working with the Excel template I was given (which I can't modify).
I'm open to any suggestions.
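One alternative that may be worth trying (a sketch, assuming XLSTransformer accepts an arbitrary bean map and processes every sheet of the template in a single pass - I haven't verified this against your template): put both lists into one map and call transformXLS only once, so the second call can no longer overwrite the first sheet:
Map<String, Object> beanParams = new HashMap<String, Object>();
beanParams.put("rs", rs);         // referenced by the first sheet of the template
beanParams.put("detRes", detRes); // referenced by the second sheet

XLSTransformer former = new XLSTransformer();
former.transformXLS(srcFilePath, beanParams, destFilePath);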

How to rename Columns via Lambda function - fasterXML

I'm using the FasterXML library to parse my CSV file. The CSV file has the column names in its first line. Unfortunately I need the columns to be renamed. I have a lambda function for this, where I can pass the read value from the CSV file in and get the new value.
My code looks like this, but it does not work:
CsvSchema csvSchema = CsvSchema.emptySchema().withHeader();
ArrayList<HashMap<String, String>> result = new ArrayList<HashMap<String, String>>();
MappingIterator<HashMap<String, String>> it = new CsvMapper().reader(HashMap.class)
        .with(csvSchema)
        .readValues(new File(fileName));
while (it.hasNext())
    result.add(it.next());

System.out.println("changing the schema columns.");
for (int i = 0; i < csvSchema.size(); i++) {
    String name = csvSchema.column(i).getName();
    String newName = getNewName(name);
    csvSchema.builder().renameColumn(i, newName);
}
csvSchema.rebuild();
When I try to print out the columns later, they are still the same as in the top line of my CSV file.
Additionally, I noticed that csvSchema.size() equals 0 - why?
You could instead use uniVocity-parsers for that. The following solution streams the input rows to the output, so you don't need to load everything into memory and then write your data back with new headers. It will be much faster:
public static void main(String ... args) throws Exception{
    Writer output = new StringWriter(); // use a FileWriter for your case
    CsvWriterSettings writerSettings = new CsvWriterSettings(); //many options here - check the documentation
    final CsvWriter writer = new CsvWriter(output, writerSettings);

    CsvParserSettings parserSettings = new CsvParserSettings(); //many options here as well
    parserSettings.setHeaderExtractionEnabled(true); // indicates the first row of the input are headers
    parserSettings.setRowProcessor(new AbstractRowProcessor(){
        public void processStarted(ParsingContext context) {
            writer.writeHeaders("Column A", "Column B", "... etc");
        }
        public void rowProcessed(String[] row, ParsingContext context) {
            writer.writeRow(row);
        }
        public void processEnded(ParsingContext context) {
            writer.close();
        }
    });

    CsvParser parser = new CsvParser(parserSettings);
    Reader reader = new StringReader("A,B,C\n1,2,3\n4,5,6"); // use a FileReader for your case
    parser.parse(reader); // all rows are parsed and submitted to the RowProcessor implementation of the parserSettings.

    System.out.println(output.toString());
    //nothing else to do. All resources are closed automatically in case of errors.
}
You can easily select the columns by using parserSettings.selectFields("B", "A") in case you want to reorder/eliminate columns.
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
