StatefulBeanToCsv with Column headers - java

I am using opencsv-4.0 to write a CSV file, and I need to add column headers to the output file.
Here is my code.
public static void buildProductCsv(final List<Product> product,
final String filePath) {
try {
Writer writer = new FileWriter(filePath);
// mapping of columns with their positions
ColumnPositionMappingStrategy<Product> mappingStrategy = new ColumnPositionMappingStrategy<Product>();
// Set mappingStrategy type to Product Type
mappingStrategy.setType(Product.class);
// Fields in Product Bean
String[] columns = new String[] { "productCode", "MFD", "EXD" };
// Setting the columns for mappingStrategy
mappingStrategy.setColumnMapping(columns);
StatefulBeanToCsvBuilder<Product> builder = new StatefulBeanToCsvBuilder<Product>(writer);
StatefulBeanToCsv<Product> beanWriter = builder.withMappingStrategy(mappingStrategy).build();
// Writing data to csv file
beanWriter.write(product);
writer.close();
log.info("Your csv file has been generated!");
} catch (Exception ex) {
log.warning("Exception: " + ex.getMessage());
}
}
The above code creates a CSV file with data, but it does not include column headers in that file.
How can I add column headers to the output CSV?

ColumnPositionMappingStrategy#generateHeader returns empty array
/**
* This method returns an empty array.
* The column position mapping strategy assumes that there is no header, and
* thus it also does not write one, accordingly.
* @return An empty array
*/
@Override
public String[] generateHeader() {
return new String[0];
}
If you remove the MappingStrategy from the StatefulBeanToCsv builder:
// replace
StatefulBeanToCsv<Product> beanWriter = builder.withMappingStrategy(mappingStrategy).build();
// with
StatefulBeanToCsv<Product> beanWriter = builder.build();
it will write the Product class's member names as the CSV header.
If your Product class's member names are
"productCode", "MFD", "EXD"
this should be the right solution.
Otherwise, add the @CsvBindByName annotation:
import com.opencsv.bean.CsvBindByName;
import com.opencsv.bean.StatefulBeanToCsv;
import com.opencsv.bean.StatefulBeanToCsvBuilder;
import java.io.FileWriter;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;
public class CsvTest {
public static void main(String[] args) throws Exception {
Writer writer = new FileWriter(fileName); // fileName: path of the output CSV file
StatefulBeanToCsvBuilder<Product> builder = new StatefulBeanToCsvBuilder<>(writer);
StatefulBeanToCsv<Product> beanWriter = builder.build();
List<Product> products = new ArrayList<>();
products.add(new Product("1", "11", "111"));
products.add(new Product("2", "22", "222"));
products.add(new Product("3", "33", "333"));
beanWriter.write(products);
writer.close();
}
public static class Product {
#CsvBindByName(column = "productCode")
String id;
#CsvBindByName(column = "MFD")
String member2;
#CsvBindByName(column = "EXD")
String member3;
Product(String id, String member2, String member3) {
this.id = id;
this.member2 = member2;
this.member3 = member3;
}
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getMember2() {
return member2;
}
public void setMember2(String member2) {
this.member2 = member2;
}
public String getMember3() {
return member3;
}
public void setMember3(String member3) {
this.member3 = member3;
}
}
}
Output:
"EXD","MFD","PRODUCTCODE"
"111","11","1"
"222","22","2"
"333","33","3"
Pay attention: the class, getters, and setters need to be public because the OpenCSV library uses reflection.

You can build the header from the annotations and append it yourself
public void export(List<YourObject> list, PrintWriter writer) throws Exception {
writer.append( buildHeader( YourObject.class ) );
StatefulBeanToCsvBuilder<YourObject> builder = new StatefulBeanToCsvBuilder<>( writer );
StatefulBeanToCsv<YourObject> beanWriter = builder.build();
beanWriter.write( mapper.map( list ) );
writer.close();
}
private String buildHeader(Class<YourObject> clazz) {
return Arrays.stream( clazz.getDeclaredFields() )
.filter( f -> f.getAnnotation( CsvBindByPosition.class ) != null
&& f.getAnnotation( CsvBindByName.class ) != null )
.sorted( Comparator.comparing( f -> f.getAnnotation( CsvBindByPosition.class ).position() ) )
.map( f -> f.getAnnotation( CsvBindByName.class ).column() )
.collect( Collectors.joining( "," ) ) + "\n";
}
@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
public class YourObject {
@CsvBindByPosition(position = 0)
@CsvBindByName(column = "A")
private Long a;
@CsvBindByPosition(position = 1)
@CsvBindByName(column = "B")
private String b;
@CsvBindByPosition(position = 2)
@CsvBindByName(column = "C")
private String c;
}

I may have missed something obvious here but couldn't you just append your header String to the writer object?
Writer writer = new FileWriter(filePath);
writer.append("header1, header2, header3, ...etc \n");
// This will be followed by your code with the StatefulBeanToCsvBuilder
// Note: the line terminator might differ depending on the environment.
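As a minimal sketch of that idea applied to the Product bean and filePath from the question (an assumption; imports and exception handling are left out, as in the snippets above), the manually appended header just has to list the columns in the same order as the column mapping:
Writer writer = new FileWriter(filePath);
writer.append("productCode,MFD,EXD\n"); // header line written by hand
ColumnPositionMappingStrategy<Product> mappingStrategy = new ColumnPositionMappingStrategy<>();
mappingStrategy.setType(Product.class);
mappingStrategy.setColumnMapping("productCode", "MFD", "EXD"); // same order as the appended header
StatefulBeanToCsv<Product> beanWriter = new StatefulBeanToCsvBuilder<Product>(writer)
.withMappingStrategy(mappingStrategy)
.build();
beanWriter.write(product);
writer.close();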

Use a HeaderColumnNameMappingStrategy for reading, then use the same strategy for writing. "Same" in this case meaning not just the same class, but really the same object.
From the javadoc of StatefulBeanToCsvBuilder.withMappingStrategy:
It is perfectly legitimate to read a CSV source, take the mapping strategy from the read operation, and pass it in to this method for a write operation. This conserves some processing time, but, more importantly, preserves header ordering.
This way you will get a CSV including headers, with columns in the same order as the original CSV.
Worked for me using OpenCSV 5.4.
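A minimal sketch of that reuse, assuming a Product bean bound with @CsvBindByName and hypothetical input.csv/output.csv paths (imports and checked exceptions omitted, as in the other snippets):
HeaderColumnNameMappingStrategy<Product> strategy = new HeaderColumnNameMappingStrategy<>();
strategy.setType(Product.class);
try (Reader reader = new FileReader("input.csv")) {
// reading with the strategy lets it capture the header and its ordering
List<Product> rows = new CsvToBeanBuilder<Product>(reader)
.withMappingStrategy(strategy)
.build()
.parse();
try (Writer writer = new FileWriter("output.csv")) {
// reuse the very same strategy object for writing
StatefulBeanToCsv<Product> beanToCsv = new StatefulBeanToCsvBuilder<Product>(writer)
.withMappingStrategy(strategy)
.build();
beanToCsv.write(rows);
}
}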

Use a custom strategy
static class CustomStrategy<T> extends ColumnPositionMappingStrategy<T> {
@Override
public String[] generateHeader() {
return this.getColumnMapping();
}
}
and on the CSV bean that you write, do not forget to provide both annotations:
@CsvBindByName(column = "UID")
@CsvBindByPosition(position = 3)
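A rough usage sketch, wiring this custom strategy into the writer code from the original question (an assumption): the strings passed to setColumnMapping drive the column order and, through the overridden generateHeader, also become the header row.
CustomStrategy<Product> strategy = new CustomStrategy<>();
strategy.setType(Product.class);
strategy.setColumnMapping("productCode", "MFD", "EXD"); // used for mapping and, via generateHeader, as the header
StatefulBeanToCsv<Product> beanWriter = new StatefulBeanToCsvBuilder<Product>(writer)
.withMappingStrategy(strategy)
.build();
beanWriter.write(product);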

You could also override the generateHeader method and return the column mapping that was set, which will produce a header row in the CSV:
ColumnPositionMappingStrategy<Product> mappingStrategy = new ColumnPositionMappingStrategy<Product>() {
@Override
public String[] generateHeader(Product bean) throws CsvRequiredFieldEmptyException {
return this.getColumnMapping();
}
};

Related

Why opencsv capitalizing csv headers while writing to file

While writing beans to a CSV file using OpenCSV 4.6, all the headers are changed to uppercase. Even though the bean has the @CsvBindByName annotation, the header is changed to uppercase.
Java Bean:
public class ProjectInfo implements Serializable {
#CsvBindByName(column = "ProjectName",required = true)
private String projectName;
#CsvBindByName(column = "ProjectCode",required = true)
private String projectCode;
#CsvBindByName(column = "Visibility",required = true)
private String visibility;
//setters and getters
}
Main method
public static void main(String[] args) throws IOException {
Collection<Serializable> projectInfos = getProjectsInfo();
try(BufferedWriter writer = new BufferedWriter(new FileWriter("test.csv"))){
StatefulBeanToCsvBuilder builder = new StatefulBeanToCsvBuilder(writer);
StatefulBeanToCsv beanWriter = builder
.withSeparator(';')
.build();
try {
beanWriter.write(projectInfos.iterator());
writer.flush();
} catch (CsvDataTypeMismatchException | CsvRequiredFieldEmptyException e) {
throw new RuntimeException("Failed to download admin file");
}
}
}
Expected Result:
"ProjectCode";"ProjectName";"Visibility"
"ANY";"Country DU";"1"
"STD";"Standard";"1"
"TST";"Test";"1"
"CMM";"CMMTest";"1"
Actual Result:
"PROJECTCODE";"PROJECTNAME";"VISIBILITY"
"ANY";"Country DU";"1"
"STD";"Standard";"1"
"TST";"Test";"1"
"CMM";"CMMTest";"1"
I don't have the option to use ColumnPositionMappingStrategy because I have to build this method as a generic solution.
Can anyone suggest how to write the headers as they are?
This happens because the code in HeaderColumnNameMappingStrategy uses toUpperCase() for storing and retrieving the field names.
You could use HeaderColumnNameTranslateMappingStrategy instead and create the mapping by reflection:
public class AnnotationStrategy extends HeaderColumnNameTranslateMappingStrategy
{
public AnnotationStrategy(Class<?> clazz)
{
Map<String,String> map=new HashMap<>();
//To prevent the column sorting
List<String> originalFieldOrder=new ArrayList<>();
for(Field field:clazz.getDeclaredFields())
{
CsvBindByName annotation = field.getAnnotation(CsvBindByName.class);
if(annotation!=null)
{
map.put(annotation.column(),annotation.column());
originalFieldOrder.add(annotation.column());
}
}
setType(clazz);
setColumnMapping(map);
//Order the columns as they were created
setColumnOrderOnWrite((a,b) -> Integer.compare(originalFieldOrder.indexOf(a), originalFieldOrder.indexOf(b)));
}
@Override
public String[] generateHeader(Object bean) throws CsvRequiredFieldEmptyException
{
String[] result=super.generateHeader(bean);
for(int i=0;i<result.length;i++)
{
result[i]=getColumnName(i);
}
return result;
}
}
And, assuming that there is only one class of items (and always at least one item), the creation of beanWriter has to be expanded:
StatefulBeanToCsv beanWriter = builder.withSeparator(';')
.withMappingStrategy(new AnnotationStrategy(projectInfos.iterator().next().getClass()))
.build();
Actually, HeaderColumnNameMappingStrategy uses toUpperCase() for storing and retrieving the field names.
In order to use a custom field name you have to annotate your field with @CsvBindByName:
@CsvBindByName(column = "Partner Code")
private String partnerCode;
By default it will be capitalized to PARTNER CODE for the reason above.
So, in order to take control over it, we have to write a class extending HeaderColumnNameTranslateMappingStrategy. With opencsv 5.0 and Java 8, I implemented it like this:
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.bean.HeaderColumnNameTranslateMappingStrategy;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
public class AnnotationStrategy<T> extends HeaderColumnNameTranslateMappingStrategy<T> {
Map<String, String> columnMap = new HashMap<>();
public AnnotationStrategy(Class<? extends T> clazz) {
for (Field field : clazz.getDeclaredFields()) {
CsvBindByName annotation = field.getAnnotation(CsvBindByName.class);
if (annotation != null) {
columnMap.put(field.getName().toUpperCase(), annotation.column());
}
}
setType(clazz);
}
@Override
public String getColumnName(int col) {
String name = headerIndex.getByPosition(col);
return name;
}
public String getColumnName1(int col) {
String name = headerIndex.getByPosition(col);
if(name != null) {
name = columnMap.get(name);
}
return name;
}
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
String[] result = super.generateHeader(bean);
for (int i = 0; i < result.length; i++) {
result[i] = getColumnName1(i);
}
return result;
}
}
I have tried the other solutions, but they don't work when the property name and column name are not the same.
I am using 5.6. My solution is to reuse the strategy.
public class CsvRow {
#CsvBindByName(column = "id")
private String id;
// Property name and column name are different
#CsvBindByName(column = "country_code")
private String countryCode;
}
// We are going to reuse this strategy
HeaderColumnNameMappingStrategy<CsvRow> strategy = new HeaderColumnNameMappingStrategy<>();
strategy.setType(CsvRow.class);
// Build the header line which respects the declaration order
// So its value will be "id,country_code"
String headerLine = Arrays.stream(CsvRow.class.getDeclaredFields())
.map(field -> field.getAnnotation(CsvBindByName.class))
.filter(Objects::nonNull)
.map(CsvBindByName::column)
.collect(Collectors.joining(","));
// Let the library initialize column details in the strategy
try (StringReader stringReader = new StringReader(headerLine);
CSVReader reader = new CSVReader(stringReader)) {
CsvToBean<CsvRow> csv = new CsvToBeanBuilder<CsvRow>(reader)
.withType(CsvRow.class)
.withMappingStrategy(strategy)
.build();
for (CsvRow csvRow : csv) {} // iterating triggers the read that initializes the strategy
}
The strategy is now ready for writing the CSV file.
try (OutputStream outputStream = Files.newOutputStream(Path.of("test.csv"));
OutputStreamWriter writer = new OutputStreamWriter(outputStream)) {
StatefulBeanToCsv<CsvRow> csv = new StatefulBeanToCsvBuilder<CsvRow>(writer)
.withMappingStrategy(strategy)
.withThrowExceptions(true)
.build();
csv.write(csvRows);
}
Using opencsv 5.0 and Java 8, I had to modify the AnnotationStrategy class code as follows to get it to compile:
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.bean.HeaderColumnNameTranslateMappingStrategy;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
public class AnnotationStrategy<T> extends HeaderColumnNameTranslateMappingStrategy<T> {
public AnnotationStrategy(Class<? extends T> clazz) {
Map<String, String> map = new HashMap<>();
for (Field field : clazz.getDeclaredFields()) {
CsvBindByName annotation = field.getAnnotation(CsvBindByName.class);
if (annotation != null) {
map.put(annotation.column(), annotation.column());
}
}
setType(clazz);
setColumnMapping(map);
}
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
String[] result = super.generateHeader(bean);
for (int i = 0; i < result.length; i++) {
result[i] = getColumnName(i);
}
return result;
}
}

how to write String array in a csv column

public class bean {
private String name;
private String[] friends;
}
public void createSuperCSVFile(final List<VariantTO> data,
final File file) throws IOException {
ICsvBeanWriter beanWriter = null;
try {
String[] header = {"name", "friends"};
beanWriter = new CsvBeanWriter(new FileWriter(file), TAB_PREFERENCE);
// write the header
beanWriter.writeHeader(header);
for (Object object: data) {
beanWriter.write(object, header);
}
} finally {
if( beanWriter != null ) {
beanWriter.close();
}
}
}
I am using Super CSV to write a POJO with an attribute containing a String array to CSV. The CsvBeanWriter simply writes the array's object reference instead of its values in the column. Are there any settings to map the value correctly?
EXPECTED
name friends
john dimitry,olaf,nett
ACTUAL
name friends
john [Ljava.lang.String;#50ccb5a3
The solution was to write my own cell processor. I wrote a String[] processor, which returns a comma separated value as a string.
final CellProcessor[] PROCESSORS = new CellProcessor[] {
new NotNull(),
new ParseStringArray()
};
beanWriter = new CsvBeanWriter(new FileWriter(file), TAB_PREFERENCE);
for (Object object: data) {
beanWriter.write(object, header, PROCESSORS);
}
class ParseStringArray extends CellProcessorAdaptor implements StringCellProcessor {
@Override
public <T> T execute(final Object value, final CsvContext context) {
validateInputNotNull(value, context);
String result;
if (value instanceof String[]) {
result = StringUtils.join((String[]) value, ",");
} else {
final String actualClassName = value.getClass().getName();
throw new SuperCsvCellProcessorException(String.format(
"the input value should be of type String array but is of type %s", actualClassName), context, this);
}
return next.execute(result, context);
}
}
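Plugged back into the original method, a sketch of the full write path might look like this (assuming the same header array and TAB_PREFERENCE as in the question; the processors are positional, so their order must match the header):
String[] header = {"name", "friends"};
CellProcessor[] processors = new CellProcessor[] {
new NotNull(), // name column
new ParseStringArray() // friends column, joined with commas
};
ICsvBeanWriter beanWriter = null;
try {
beanWriter = new CsvBeanWriter(new FileWriter(file), TAB_PREFERENCE);
beanWriter.writeHeader(header);
for (Object object : data) {
beanWriter.write(object, header, processors);
}
} finally {
if (beanWriter != null) {
beanWriter.close();
}
}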

How to specify column order when using opencsv beanwriter? [duplicate]

I have created a MappingsBean class where all the columns of the CSV file are specified. Next I parse XML files and create a list of mapping beans. Then I write that data into a CSV file as a report.
I am using following annotations:
public class MappingsBean {
#CsvBindByName(column = "TradeID")
#CsvBindByPosition(position = 0)
private String tradeId;
#CsvBindByName(column = "GWML GUID", required = true)
#CsvBindByPosition(position = 1)
private String gwmlGUID;
#CsvBindByName(column = "MXML GUID", required = true)
#CsvBindByPosition(position = 2)
private String mxmlGUID;
#CsvBindByName(column = "GWML File")
#CsvBindByPosition(position = 3)
private String gwmlFile;
#CsvBindByName(column = "MxML File")
#CsvBindByPosition(position = 4)
private String mxmlFile;
#CsvBindByName(column = "MxML Counterparty")
#CsvBindByPosition(position = 5)
private String mxmlCounterParty;
#CsvBindByName(column = "GWML Counterparty")
#CsvBindByPosition(position = 6)
private String gwmlCounterParty;
}
And then I use StatefulBeanToCsv class to write into CSV file:
File reportFile = new File(reportOutputDir + "/" + REPORT_FILENAME);
Writer writer = new PrintWriter(reportFile);
StatefulBeanToCsv<MappingsBean> beanToCsv = new
StatefulBeanToCsvBuilder(writer).build();
beanToCsv.write(makeFinalMappingBeanList());
writer.close();
The problem with this approach is that if I use @CsvBindByPosition(position = 0) to control
position, then I am not able to generate column names. If I use @CsvBindByName(column = "TradeID"), then I am not able to set the position of the columns.
Is there a way where I can use both annotations, so that I can create CSV files with column headers and also control column position?
Regards,
Vikram Pathania
I've had a similar problem. AFAIK there is no built-in functionality in OpenCSV that allows writing a bean to CSV with both custom column names and custom ordering.
There are two main MappingStrategy implementations available in OpenCSV out of the box:
HeaderColumnNameMappingStrategy: maps CSV file columns to bean fields based on a custom name; when writing a bean to CSV this allows changing the column header name, but we have no control over column order
ColumnPositionMappingStrategy: maps CSV file columns to bean fields based on column ordering; when writing a bean to CSV we can control column order, but we get an empty header (the implementation returns new String[0] as the header)
The only way I found to achieve both custom column names and ordering is to write a custom MappingStrategy.
First solution: fast and easy but hardcoded
Create custom MappingStrategy:
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
private static final String[] HEADER = new String[]{"TradeID", "GWML GUID", "MXML GUID", "GWML File", "MxML File", "MxML Counterparty", "GWML Counterparty"};
@Override
public String[] generateHeader() {
return HEADER;
}
}
And use it in StatefulBeanToCsvBuilder:
final CustomMappingStrategy<MappingsBean> mappingStrategy = new CustomMappingStrategy<>();
mappingStrategy.setType(MappingsBean.class);
final StatefulBeanToCsv<MappingsBean> beanToCsv = new StatefulBeanToCsvBuilder<MappingsBean>(writer)
.withMappingStrategy(mappingStrategy)
.build();
beanToCsv.write(makeFinalMappingBeanList());
writer.close()
In the MappingsBean class we keep the CsvBindByPosition annotations to control ordering (in this solution the CsvBindByName annotations are not needed). Thanks to the custom mapping strategy, the header column names are included in the resulting CSV file.
The downside of this solution is that when we change the column ordering through the CsvBindByPosition annotation, we also have to manually change the HEADER constant in our custom mapping strategy.
Second solution: more flexible
The first solution works, but it was not good for me. Based on the built-in implementations of MappingStrategy I came up with yet another implementation:
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
@Override
public String[] generateHeader() {
final int numColumns = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader();
}
header = new String[numColumns + 1];
BeanField beanField;
for (int i = 0; i <= numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null || beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
You can use this custom strategy in StatefulBeanToCsvBuilder exactly the same way as in the first solution (remember to invoke mappingStrategy.setType(MappingsBean.class);, otherwise this solution will not work).
Now our MappingsBean has to contain both the CsvBindByName and CsvBindByPosition annotations: the first gives the header column name and the second creates the ordering of columns in the output CSV header. If we change (using annotations) either the column name or the ordering in the MappingsBean class, that change will be reflected in the output CSV file.
Corrected the above answer to match a newer version:
package csvpojo;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.reflect.FieldUtils;
import com.opencsv.bean.BeanField;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.setColumnMapping(new String[FieldUtils.getAllFields(bean.getClass()).length]);
final int numColumns = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader(bean);
}
String[] header = new String[numColumns + 1];
BeanField<T> beanField;
for (int i = 0; i <= numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField<T> beanField) {
if (beanField == null || beanField.getField() == null
|| beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField()
.getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
Then call this to generate the CSV. I have used Visitors as my POJO to populate; update it wherever necessary.
CustomMappingStrategy<Visitors> mappingStrategy = new CustomMappingStrategy<>();
mappingStrategy.setType(Visitors.class);
// writing sample
List<Visitors> beans2 = new ArrayList<Visitors>();
Visitors v = new Visitors();
v.set_1_firstName(" test1");
v.set_2_lastName("lastname1");
v.set_3_visitsToWebsite("876");
beans2.add(v);
v = new Visitors();
v.set_1_firstName(" firstsample2");
v.set_2_lastName("lastname2");
v.set_3_visitsToWebsite("777");
beans2.add(v);
Writer writer = new FileWriter("G://output.csv");
StatefulBeanToCsv<Visitors> beanToCsv = new StatefulBeanToCsvBuilder<Visitors>(writer)
.withMappingStrategy(mappingStrategy).withSeparator(',').withApplyQuotesToAll(false).build();
beanToCsv.write(beans2);
writer.close();
My bean annotations look like this:
@CsvBindByName (column = "First Name", required = true)
@CsvBindByPosition(position=1)
private String firstName;
@CsvBindByName (column = "Last Name", required = true)
@CsvBindByPosition(position=0)
private String lastName;
I wanted to achieve bidirectional import/export: to be able to import the generated CSV back into POJOs and vice versa.
I was not able to use @CsvBindByPosition for this, because in that case ColumnPositionMappingStrategy is selected automatically. Per the documentation, this strategy requires that the file does NOT have a header.
What I've used to achieve the goal:
HeaderColumnNameMappingStrategy
mappingStrategy.setColumnOrderOnWrite(Comparator<String> writeOrder)
CsvUtils to read/write csv
import com.opencsv.CSVWriter;
import com.opencsv.bean.*;
import org.springframework.web.multipart.MultipartFile;
import java.io.*;
import java.util.List;
public class CsvUtils {
private CsvUtils() {
}
public static <T> String convertToCsv(List<T> entitiesList, MappingStrategy<T> mappingStrategy) throws Exception {
try (Writer writer = new StringWriter()) {
StatefulBeanToCsv<T> beanToCsv = new StatefulBeanToCsvBuilder<T>(writer)
.withMappingStrategy(mappingStrategy)
.withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
.build();
beanToCsv.write(entitiesList);
return writer.toString();
}
}
@SuppressWarnings("unchecked")
public static <T> List<T> convertFromCsv(MultipartFile file, Class clazz) throws IOException {
try (Reader reader = new BufferedReader(new InputStreamReader(file.getInputStream()))) {
CsvToBean<T> csvToBean = new CsvToBeanBuilder<T>(reader).withType(clazz).build();
return csvToBean.parse();
}
}
}
POJO for import/export
public class LocalBusinessTrainingPairDTO {
//this is used for CSV columns ordering on exporting LocalBusinessTrainingPairs
public static final String[] FIELDS_ORDER = {"leftId", "leftName", "rightId", "rightName"};
#CsvBindByName(column = "leftId")
private int leftId;
#CsvBindByName(column = "leftName")
private String leftName;
#CsvBindByName(column = "rightId")
private int rightId;
#CsvBindByName(column = "rightName")
private String rightName;
// getters/setters omitted, do not forget to add them
}
Custom comparator for predefined String ordering:
public class OrderedComparatorIgnoringCase implements Comparator<String> {
private List<String> predefinedOrder;
public OrderedComparatorIgnoringCase(String[] predefinedOrder) {
this.predefinedOrder = new ArrayList<>();
for (String item : predefinedOrder) {
this.predefinedOrder.add(item.toLowerCase());
}
}
@Override
public int compare(String o1, String o2) {
return predefinedOrder.indexOf(o1.toLowerCase()) - predefinedOrder.indexOf(o2.toLowerCase());
}
}
Ordered writing for POJO (answer to initial question)
public static void main(String[] args) throws Exception {
List<LocalBusinessTrainingPairDTO> localBusinessTrainingPairsDTO = new ArrayList<>();
LocalBusinessTrainingPairDTO localBusinessTrainingPairDTO = new LocalBusinessTrainingPairDTO();
localBusinessTrainingPairDTO.setLeftId(1);
localBusinessTrainingPairDTO.setLeftName("leftName");
localBusinessTrainingPairDTO.setRightId(2);
localBusinessTrainingPairDTO.setRightName("rightName");
localBusinessTrainingPairsDTO.add(localBusinessTrainingPairDTO);
//Creating HeaderColumnNameMappingStrategy
HeaderColumnNameMappingStrategy<LocalBusinessTrainingPairDTO> mappingStrategy = new HeaderColumnNameMappingStrategy<>();
mappingStrategy.setType(LocalBusinessTrainingPairDTO.class);
//Setting predefined order using String comparator
mappingStrategy.setColumnOrderOnWrite(new OrderedComparatorIgnoringCase(LocalBusinessTrainingPairDTO.FIELDS_ORDER));
String csv = convertToCsv(localBusinessTrainingPairsDTO, mappingStrategy);
System.out.println(csv);
}
Read exported CSV back to POJO (addition to original answer)
Important: CSV can be unordered, as we are still using binding by name:
public static void main(String[] args) throws Exception {
//omitted code from writing
String csv = convertToCsv(localBusinessTrainingPairsDTO, mappingStrategy);
//Exported CSV should be compatible for further import
File temp = File.createTempFile("tempTrainingPairs", ".csv");
temp.deleteOnExit();
BufferedWriter bw = new BufferedWriter(new FileWriter(temp));
bw.write(csv);
bw.close();
MultipartFile multipartFile = new MockMultipartFile("tempTrainingPairs.csv", new FileInputStream(temp));
List<LocalBusinessTrainingPairDTO> localBusinessTrainingPairDTOList = convertFromCsv(multipartFile, LocalBusinessTrainingPairDTO.class);
}
To conclude:
We can read CSV into POJOs regardless of column order, because we are using @CsvBindByName
We can control column order on write using a custom comparator
In the latest version the solution from @sebast26 no longer works. However, the basic idea is still very good. Here is a working solution with v5.0:
import com.opencsv.bean.BeanField;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
import org.apache.commons.lang3.StringUtils;
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
final int numColumns = getFieldMap().values().size();
super.generateHeader(bean);
String[] header = new String[numColumns];
BeanField beanField;
for (int i = 0; i < numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null || beanField.getField().getDeclaredAnnotationsByType(
CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
And the model looks like this:
#CsvBindByName(column = "id")
#CsvBindByPosition(position = 0)
private Long id;
#CsvBindByName(column = "name")
#CsvBindByPosition(position = 1)
private String name;
And my generation helper looks something like this:
public static <T extends AbstractCsv> String createCsv(List<T> data, Class<T> beanClazz) {
CustomMappingStrategy<T> mappingStrategy = new CustomMappingStrategy<T>();
mappingStrategy.setType(beanClazz);
StringWriter writer = new StringWriter();
String csv = "";
try {
StatefulBeanToCsv sbc = new StatefulBeanToCsvBuilder(writer)
.withSeparator(';')
.withMappingStrategy(mappingStrategy)
.build();
sbc.write(data);
csv = writer.toString();
} catch (CsvRequiredFieldEmptyException e) {
// TODO add some logging...
} catch (CsvDataTypeMismatchException e) {
// TODO add some logging...
} finally {
try {
writer.close();
} catch (IOException e) {
}
}
return csv;
}
The following works for me to map a POJO to a CSV file with custom column positioning and custom column headers (tested with opencsv-5.0) :
public class CustomBeanToCSVMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
String[] headersAsPerFieldName = getFieldMap().generateHeader(bean); // header name based on field name
String[] header = new String[headersAsPerFieldName.length];
for (int i = 0; i <= headersAsPerFieldName.length - 1; i++) {
BeanField beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField); // header name based on @CsvBindByName annotation
if (columnHeaderName.isEmpty()) // No @CsvBindByName is present
columnHeaderName = headersAsPerFieldName[i]; // defaults to header name based on field name
header[i] = columnHeaderName;
}
headerIndex.initializeHeaderIndex(header);
return header;
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null || beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
Pojo
Column Positioning in the generated CSV file:
The column positioning in the generated CSV file will be as per the @CsvBindByPosition annotation
Header name in the generated CSV file:
If the field has @CsvBindByName, the generated header will be as per the annotation
If the field doesn't have @CsvBindByName, then the generated header will be as per the field name
@Getter @Setter @ToString
public class Pojo {
#CsvBindByName(column="Voucher Series") // header: "Voucher Series"
#CsvBindByPosition(position=0)
private String voucherSeries;
#CsvBindByPosition(position=1) // header: "salePurchaseType"
private String salePurchaseType;
}
Using the above Custom Mapping Strategy:
CustomBeanToCSVMappingStrategy<Pojo> mappingStrategy = new CustomBeanToCSVMappingStrategy<>();
mappingStrategy.setType(Pojo.class);
StatefulBeanToCsv<Pojo> beanToCsv = new StatefulBeanToCsvBuilder<Pojo>(writer)
.withSeparator(CSVWriter.DEFAULT_SEPARATOR)
.withMappingStrategy(mappingStrategy)
.build();
beanToCsv.write(pojoList);
Thanks for this thread, it has been really useful for me. I've enhanced the provided solution a little bit in order to also accept POJOs where some fields are not annotated (not meant to be read/written):
public class ColumnAndNameMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.setColumnMapping(new String[ getAnnotatedFields(bean)]);
final int numColumns = getAnnotatedFields(bean);
final int totalFieldNum = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader(bean);
}
String[] header = new String[numColumns];
BeanField<T> beanField;
for (int i = 0; i <= totalFieldNum; i++) {
beanField = findField(i);
if (isFieldAnnotated(beanField.getField())) {
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
}
return header;
}
private int getAnnotatedFields(T bean) {
return (int) Arrays.stream(FieldUtils.getAllFields(bean.getClass()))
.filter(this::isFieldAnnotated)
.count();
}
private boolean isFieldAnnotated(Field f) {
return f.isAnnotationPresent(CsvBindByName.class) || f.isAnnotationPresent(CsvCustomBindByName.class);
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null) {
return StringUtils.EMPTY;
}
Field field = beanField.getField();
if (field.getDeclaredAnnotationsByType(CsvBindByName.class).length != 0) {
final CsvBindByName bindByNameAnnotation = field.getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
if (field.getDeclaredAnnotationsByType(CsvCustomBindByName.class).length != 0) {
final CsvCustomBindByName bindByNameAnnotation = field.getDeclaredAnnotationsByType(CsvCustomBindByName.class)[0];
return bindByNameAnnotation.column();
}
return StringUtils.EMPTY;
}
}
If you're only interested in sorting the CSV columns based on the order in which member variables appear in your model class (CsvRow in this example), then you can use a Comparator implementation to solve this in a rather simple manner. Here's an example that does this in Kotlin:
class ByMemberOrderCsvComparator : Comparator<String> {
private val memberOrder by lazy {
FieldUtils.getAllFields(CsvRow::class.java)
.map { it.getDeclaredAnnotation(CsvBindByName::class.java) }
.map { it?.column ?: "" }
.map { it.toUpperCase(Locale.US) } // OpenCSV UpperCases all headers, so we do this to match
}
override fun compare(field1: String?, field2: String?): Int {
return memberOrder.indexOf(field1) - memberOrder.indexOf(field2)
}
}
This Comparator does the following:
Fetches each member variable field in our data class (CsvRow)
Finds all the ones with the @CsvBindByName annotation (in the order you specified them in the CsvRow model)
Upper-cases each to match the default OpenCSV implementation
Next, apply this Comparator to your MappingStrategy, so it'll sort based off the specified order:
val mappingStrategy = HeaderColumnNameMappingStrategy<OrderSummaryCsvRow>()
mappingStrategy.setColumnOrderOnWrite(ByMemberOrderCsvComparator())
mappingStrategy.type = CsvRow::class.java
mappingStrategy.setErrorLocale(Locale.US)
val csvWriter = StatefulBeanToCsvBuilder<OrderSummaryCsvRow>(writer)
.withMappingStrategy(mappingStrategy)
.build()
For reference, here's an example CsvRow class (you'll want to replace this with your own model for your needs):
data class CsvRow(
#CsvBindByName(column = "Column 1")
val column1: String,
#CsvBindByName(column = "Column 2")
val column2: String,
#CsvBindByName(column = "Column 3")
val column3: String,
// Other columns here ...
)
Which would produce a CSV as follows:
"COLUMN 1","COLUMN 2","COLUMN 3",...
"value 1a","value 2a","value 3a",...
"value 1b","value 2b","value 3b",...
The benefit of this approach is that it removes the need to hard-code any of your column names, which should greatly simplify things if you ever need to add/remove columns.
This is a solution for versions greater than 4.3:
public class MappingBean {
#CsvBindByName(column = "column_a")
private String columnA;
#CsvBindByName(column = "column_b")
private String columnB;
#CsvBindByName(column = "column_c")
private String columnC;
// getters and setters
}
And use it as example:
import org.apache.commons.collections4.comparators.FixedOrderComparator;
...
var mappingStrategy = new HeaderColumnNameMappingStrategy<MappingBean>();
mappingStrategy.setType(MappingBean.class);
mappingStrategy.setColumnOrderOnWrite(new FixedOrderComparator<>("COLUMN_C", "COLUMN_B", "COLUMN_A"));
var sbc = new StatefulBeanToCsvBuilder<MappingBean>(writer)
.withMappingStrategy(mappingStrategy)
.build();
Result:
column_c | column_b | column_a
The following solution works with opencsv 5.0.
First, you need to extend the ColumnPositionMappingStrategy class and override the generateHeader method to create your custom header, utilizing both the CsvBindByName and CsvBindByPosition annotations, as shown below.
import com.opencsv.bean.BeanField;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
/**
* #param <T>
*/
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
/*
* (non-Javadoc)
*
* #see com.opencsv.bean.ColumnPositionMappingStrategy#generateHeader(java.lang.
* Object)
*/
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
final int numColumns = getFieldMap().values().size();
if (numColumns == -1) {
return super.generateHeader(bean);
}
String[] header = new String[numColumns];
super.setColumnMapping(header);
BeanField<T, Integer> beanField;
for (int i = 0; i < numColumns; i++) {
beanField = findField(i);
String columnHeaderName = beanField.getField().getDeclaredAnnotation(CsvBindByName.class).column();
header[i] = columnHeaderName;
}
return header;
}
}
The next step is to use this mapping strategy while writing a bean to CSV as below.
CustomMappingStrategy<ScanReport> strategy = new CustomMappingStrategy<>();
strategy.setType(ScanReport.class);
// Write a bean to csv file.
StatefulBeanToCsv<ScanReport> beanToCsv = new StatefulBeanToCsvBuilder<ScanReport>(writer)
.withMappingStrategy(strategy).build();
beanToCsv.write(beanList);
I've improved on previous answers by removing all references to deprecated APIs while using the latest release of opencsv (4.6).
A Generic Kotlin Solution
/**
* Custom OpenCSV [ColumnPositionMappingStrategy] that allows for a header line to be generated from a target CSV
* bean model class using the following annotations when present:
* * [CsvBindByName]
* * [CsvCustomBindByName]
*/
class CustomMappingStrategy<T>(private val beanType: Class<T>) : ColumnPositionMappingStrategy<T>() {
init {
setType(beanType)
setColumnMapping(*getAnnotatedFields().map { it.extractHeaderName() }.toTypedArray())
}
override fun generateHeader(bean: T): Array<String> = columnMapping
private fun getAnnotatedFields() = beanType.declaredFields.filter { it.isAnnotatedByName() }.toList()
private fun Field.isAnnotatedByName() = isAnnotationPresent(CsvBindByName::class.java) || isAnnotationPresent(CsvCustomBindByName::class.java)
private fun Field.extractHeaderName() =
getAnnotation(CsvBindByName::class.java)?.column ?: getAnnotation(CsvCustomBindByName::class.java)?.column ?: EMPTY
}
Then use it as follows:
private fun csvBuilder(writer: Writer) =
StatefulBeanToCsvBuilder<MappingsBean>(writer)
.withSeparator(ICSVWriter.DEFAULT_SEPARATOR)
.withMappingStrategy(CustomMappingStrategy(MappingsBean::class.java))
.withApplyQuotesToAll(false)
.build()
// Kotlin try-with-resources construct
PrintWriter(File("$reportOutputDir/$REPORT_FILENAME")).use { writer ->
csvBuilder(writer).write(makeFinalMappingBeanList())
}
and for completeness, here's the CSV bean as a Kotlin data class:
data class MappingsBean(
#field:CsvBindByName(column = "TradeID")
#field:CsvBindByPosition(position = 0, required = true)
private val tradeId: String,
#field:CsvBindByName(column = "GWML GUID", required = true)
#field:CsvBindByPosition(position = 1)
private val gwmlGUID: String,
#field:CsvBindByName(column = "MXML GUID", required = true)
#field:CsvBindByPosition(position = 2)
private val mxmlGUID: String,
#field:CsvBindByName(column = "GWML File")
#field:CsvBindByPosition(position = 3)
private val gwmlFile: String? = null,
#field:CsvBindByName(column = "MxML File")
#field:CsvBindByPosition(position = 4)
private val mxmlFile: String? = null,
#field:CsvBindByName(column = "MxML Counterparty")
#field:CsvBindByPosition(position = 5)
private val mxmlCounterParty: String? = null,
#field:CsvBindByName(column = "GWML Counterparty")
#field:CsvBindByPosition(position = 6)
private val gwmlCounterParty: String? = null
)
I think the intended and most flexible way of handling the order of the header columns is to inject a comparator via HeaderColumnNameMappingStrategy.setColumnOrderOnWrite().
For me the most intuitive way was to write the CSV columns in the same order as they are specified in the CSV bean, but you can also adjust the Comparator to make use of your own annotation where you specify the order. Don't forget to rename the Comparator class then ;)
Integration:
HeaderColumnNameMappingStrategy<MyCsvBean> mappingStrategy = new HeaderColumnNameMappingStrategy<>();
mappingStrategy.setType(MyCsvBean.class);
mappingStrategy.setColumnOrderOnWrite(new ClassFieldOrderComparator(MyCsvBean.class));
Comparator:
private class ClassFieldOrderComparator implements Comparator<String> {
List<String> fieldNamesInOrderWithinClass;
public ClassFieldOrderComparator(Class<?> clazz) {
fieldNamesInOrderWithinClass = Arrays.stream(clazz.getDeclaredFields())
.filter(field -> field.getAnnotation(CsvBindByName.class) != null)
// Handle order by your custom annotation here
//.sorted((field1, field2) -> {
// int field1Order = field1.getAnnotation(YourCustomOrderAnnotation.class).getOrder();
// int field2Order = field2.getAnnotation(YourCustomOrderAnnotation.class).getOrder();
// return Integer.compare(field1Order, field2Order);
//})
.map(field -> field.getName().toUpperCase())
.collect(Collectors.toList());
}
@Override
public int compare(String o1, String o2) {
int fieldIndexo1 = fieldNamesInOrderWithinClass.indexOf(o1);
int fieldIndexo2 = fieldNamesInOrderWithinClass.indexOf(o2);
return Integer.compare(fieldIndexo1, fieldIndexo2);
}
}
This can be done using a HeaderColumnNameMappingStrategy along with a custom Comparator as well,
which is the approach recommended by the official docs: http://opencsv.sourceforge.net/#mapping_strategies
File reportFile = new File(reportOutputDir + "/" + REPORT_FILENAME);
Writer writer = new PrintWriter(reportFile);
final List<String> order = List.of("TradeID", "GWML GUID", "MXML GUID", "GWML File", "MxML File", "MxML Counterparty", "GWML Counterparty");
final FixedOrderComparator comparator = new FixedOrderComparator(order);
HeaderColumnNameMappingStrategy<MappingsBean> strategy = new HeaderColumnNameMappingStrategy<>();
strategy.setType(MappingsBean.class);
strategy.setColumnOrderOnWrite(comparator);
StatefulBeanToCsv<MappingsBean> beanToCsv = new
StatefulBeanToCsvBuilder(writer)
.withMappingStrategy(strategy)
.build();
beanToCsv.write(makeFinalMappingBeanList());
writer.close();
CustomMappingStrategy for generic class.
public class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.setColumnMapping(new String[FieldUtils.getAllFields(bean.getClass()).length]);
final int numColumns = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader(bean);
}
String[] header = new String[numColumns + 1];
BeanField<T> beanField;
for (int i = 0; i <= numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField<T> beanField) {
if (beanField == null || beanField.getField() == null
|| beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField()
.getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
POJO Class
public class Customer{
@CsvBindByPosition(position=1)
@CsvBindByName(column="CUSTOMER", required = true)
private String customer;
}
Client Class
List<T> data = getEmployeeRecord();
CustomMappingStrategy custom = new CustomMappingStrategy();
custom.setType(Employee.class);
StatefulBeanToCsv<T> writer = new StatefulBeanToCsvBuilder<T>(response.getWriter())
.withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
.withSeparator('|')
.withOrderedResults(false)
.withMappingStrategy(custom)
.build();
writer.write(reportData);
Here is another version for opencsv 5.2, because I had a problem with the @CsvCustomBindByName annotation when I tried the answers above.
I defined a custom annotation:
@Target(ElementType.FIELD)
@Inherited
@Retention(RetentionPolicy.RUNTIME)
public @interface CsvPosition {
int position();
}
and custom mapping strategy
public class CustomMappingStrategy<T> extends HeaderColumnNameMappingStrategy<T> {
private final Field[] fields;
public CustomMappingStrategy(Class<T> clazz) {
fields = clazz.getDeclaredFields();
Arrays.sort(fields, (f1, f2) -> {
CsvPosition position1 = f1.getAnnotation(CsvPosition.class);
CsvPosition position2 = f2.getAnnotation(CsvPosition.class);
return Integer.compare(position1.position(), position2.position());
});
}
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
String[] header = new String[fields.length];
for (Field f : fields) {
CsvPosition position = f.getAnnotation(CsvPosition.class);
header[position.position() - 1] = getName(f);
}
headerIndex.initializeHeaderIndex(header);
return header;
}
private String getName(Field f) {
CsvBindByName csvBindByName = f.getAnnotation(CsvBindByName.class);
CsvCustomBindByName csvCustomBindByName = f.getAnnotation(CsvCustomBindByName.class);
return csvCustomBindByName != null
? csvCustomBindByName.column() == null || csvCustomBindByName.column().isEmpty() ? f.getName() : csvCustomBindByName.column()
: csvBindByName.column() == null || csvBindByName.column().isEmpty() ? f.getName() : csvBindByName.column();
}
}
My POJO beans are annotated like this:
public class Record {
@CsvBindByName(required = true)
@CsvPosition(position = 1)
Long id;
@CsvCustomBindByName(required = true, converter = BoolanCSVField.class)
@CsvPosition(position = 2)
Boolean deleted;
...
}
and the final code for the writer:
CustomMappingStrategy<Record> mappingStrategy = new CustomMappingStrategy<>(Record.class);
mappingStrategy.setType(Record.class);
StatefulBeanToCsv beanToCsv = new StatefulBeanToCsvBuilder(writer)
.withApplyQuotesToAll(false)
.withOrderedResults(true)
.withMappingStrategy(mappingStrategy)
.build();
I hope it will be helpful for someone.
Here is the code to add support for @CsvBindByPosition-based ordering on top of the default HeaderColumnNameMappingStrategy. Tested with the latest version, 5.2.
The approach is to store two maps: headerPositionMap stores the position of each column so it can be used for setColumnOrderOnWrite, and columnMap lets us look up the actual column name rather than the capitalized one.
public class HeaderColumnNameWithPositionMappingStrategy<T> extends HeaderColumnNameMappingStrategy<T> {
protected Map<String, String> columnMap;
@Override
public void setType(Class<? extends T> type) throws CsvBadConverterException {
super.setType(type);
columnMap = new HashMap<>(this.getFieldMap().values().size());
Map<String, Integer> headerPositionMap = new HashMap<>(this.getFieldMap().values().size());
for (Field field : type.getDeclaredFields()) {
if (field.isAnnotationPresent(CsvBindByPosition.class) && field.isAnnotationPresent(CsvBindByName.class)) {
int position = field.getAnnotation(CsvBindByPosition.class).position();
String colName = "".equals(field.getAnnotation(CsvBindByName.class).column()) ? field.getName() : field.getAnnotation(CsvBindByName.class).column();
headerPositionMap.put(colName.toUpperCase().trim(), position);
columnMap.put(colName.toUpperCase().trim(), colName);
}
}
super.setColumnOrderOnWrite((String o1, String o2) -> {
if (!headerPositionMap.containsKey(o1) || !headerPositionMap.containsKey(o2)) {
return 0;
}
return headerPositionMap.get(o1) - headerPositionMap.get(o2);
});
}
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
String[] headersRaw = super.generateHeader(bean);
return Arrays.stream(headersRaw).map(h -> columnMap.get(h)).toArray(String[]::new);
}
}
If you don't have the getDeclaredAnnotationsByType method, but need the name of your original field, use:
beanField.getField().getName()
public class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
@Override
public String[] generateHeader() {
final int numColumns = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader();
}
header = new String[numColumns + 1];
BeanField beanField;
for (int i = 0; i <= numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null || beanField.getField().getDeclaredAnnotations().length == 0) {
return StringUtils.EMPTY;
}
return beanField.getField().getName();
}
}
Try something like below:
private static class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
String[] header;
public CustomMappingStrategy(String[] cols) {
header = cols;
}
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
return header;
}
}
Then use it as follows:
String[] columns = new String[]{"Name", "Age", "Company", "Salary"};
CustomMappingStrategy<Employee> mappingStrategy = new CustomMappingStrategy<Employee>(columns);
where columns holds the columns of your bean and Employee is your bean class.
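A brief sketch of how that strategy might be wired into the writer (assuming an Employee bean whose fields are name, age, company and salary, a Writer named writer, and a List<Employee> named employees, all hypothetical):
String[] columns = new String[]{"Name", "Age", "Company", "Salary"};
CustomMappingStrategy<Employee> mappingStrategy = new CustomMappingStrategy<Employee>(columns);
mappingStrategy.setType(Employee.class);
mappingStrategy.setColumnMapping("name", "age", "company", "salary"); // bean field names, in header order
StatefulBeanToCsv<Employee> beanToCsv = new StatefulBeanToCsvBuilder<Employee>(writer)
.withMappingStrategy(mappingStrategy)
.build();
beanToCsv.write(employees);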
Great thread. I don't have any annotations in my POJO, and this is how I did it based on all the previous answers. Hope it helps others.
OpenCSV version: 5.0
List readVendors = getFromMethod();
String[] fields= {"id","recordNumber","finVendorIdTb","finVenTechIdTb","finShortNameTb","finVenName1Tb","finVenName2Tb"};
String[] csvHeader= {"Id#","Shiv Record Number","Shiv Vendor Id","Shiva Tech Id#","finShortNameTb","finVenName1Tb","finVenName2Tb"};
CustomMappingStrategy<FinVendor> mappingStrategy = new CustomMappingStrategy(csvHeader);//csvHeader as per custom header irrespective of pojo field name
mappingStrategy.setType(FinVendor.class);
mappingStrategy.setColumnMapping(fields);//pojo mapping fields
StatefulBeanToCsv<FinVendor> beanToCsv = new StatefulBeanToCsvBuilder<FinVendor>(writer).withQuotechar(CSVWriter.NO_QUOTE_CHARACTER).withMappingStrategy(mappingStrategy).build();
beanToCsv.write(readVendors);
//custom mapping class as mentioned in the thread by many users
private static class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
String[] header;
public CustomMappingStrategy(String[] cols) {
header = cols;
}
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.generateHeader(bean);
return header;
}
}
Output:
Id# Shiv Record Number Shiv Vendor Id Fin Tech Id# finShortNameTb finVenName1Tb finVenName2Tb finVenDefaultLocTb
1 VEN00053 678 33316025986 THE ssOHIO S_2 THE UNIVERSITY CHK Test
2 VEN02277 1217 3044374205 Fe3 MECHA_1 FR3INC EFT-1
3 VEN03118 1310 30234484121 PE333PECTUS_1 PER332CTUS AR EFT-1 Test
sebast26's first solution worked for me, but for opencsv version 5.2 it requires a little change in the CustomMappingStrategy class:
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
private static final String[] HEADER = new String[]{"TradeID", "GWML GUID", "MXML GUID", "GWML File", "MxML File", "MxML Counterparty", "GWML Counterparty"};
@Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.generateHeader(bean); // without this the file contains ONLY headers
return HEADER;
}
}
In case you need this to preserve column ordering from the original CSV: use a HeaderColumnNameMappingStrategy for reading, then use the same strategy for writing. "Same" in this case meaning not just the same class, but really the same object.
From the javadoc of StatefulBeanToCsvBuilder.withMappingStrategy:
It is perfectly legitimate to read a CSV source, take the mapping
strategy from the read operation, and pass it in to this method for a
write operation. This conserves some processing time, but, more
importantly, preserves header ordering.
This way you will get a CSV including headers, with columns in the same order as the original CSV.
Worked for me using OpenCSV 5.4.
It took me time as well, but I found the solution.
Add these annotations to your POJO: @CsvBindByName and @CsvBindByPosition, with the right name and position for each field.
My POJO:
@JsonIgnoreProperties(ignoreUnknown = true)
@Getter
@Setter
public class CsvReport {
@CsvBindByName(column = "Campaign")
@CsvBindByPosition(position = 0)
private String program;
@CsvBindByName(column = "Report")
@CsvBindByPosition(position = 1)
private String report;
@CsvBindByName(column = "Metric Label")
@CsvBindByPosition(position = 2)
private String metric;
}
And add this code (my POJO is called CsvReport):
ColumnPositionMappingStrategy<CsvReport> mappingStrategy = new ColumnPositionMappingStrategyBuilder<CsvReport>().build();
mappingStrategy.setType(CsvReport.class);
//add your headers in the sort you want to be in the file:
String[] columns = new String[] { "Campaign", "Report", "Metric Label"};
mappingStrategy.setColumnMapping(columns);
//Write your headers first in your chosen Writer:
Writer responseWriter = response.getWriter();
responseWriter.append(String.join(",", columns)).append("\n");
// Configure the CSV writer builder
StatefulBeanToCsv<CsvReport> writer = new StatefulBeanToCsvBuilder<CsvReport>(responseWriter)
.withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
.withSeparator(CSVWriter.DEFAULT_SEPARATOR)
.withOrderedResults(true) // I needed to keep the order; if you don't, put false.
.withMappingStrategy(mappingStrategy)
.build();
String fileName = "your file name";
response.setHeader(HttpHeaders.CONTENT_DISPOSITION,String.format("attachment; filename=%s", fileName));
writer.write(csvReports);
This will create a new CSV file with your printed headers and ordered fields.

OpenCSV: How to create CSV file from POJO with custom column headers and custom column positions?

I have created a MappingsBean class where all the columns of the CSV file are specified. Next I parse XML files and create a list of mappingbeans. Then I write that data into CSV file as report.
I am using following annotations:
public class MappingsBean {
#CsvBindByName(column = "TradeID")
#CsvBindByPosition(position = 0)
private String tradeId;
#CsvBindByName(column = "GWML GUID", required = true)
#CsvBindByPosition(position = 1)
private String gwmlGUID;
#CsvBindByName(column = "MXML GUID", required = true)
#CsvBindByPosition(position = 2)
private String mxmlGUID;
#CsvBindByName(column = "GWML File")
#CsvBindByPosition(position = 3)
private String gwmlFile;
#CsvBindByName(column = "MxML File")
#CsvBindByPosition(position = 4)
private String mxmlFile;
#CsvBindByName(column = "MxML Counterparty")
#CsvBindByPosition(position = 5)
private String mxmlCounterParty;
#CsvBindByName(column = "GWML Counterparty")
#CsvBindByPosition(position = 6)
private String gwmlCounterParty;
}
And then I use StatefulBeanToCsv class to write into CSV file:
File reportFile = new File(reportOutputDir + "/" + REPORT_FILENAME);
Writer writer = new PrintWriter(reportFile);
StatefulBeanToCsv<MappingsBean> beanToCsv = new
StatefulBeanToCsvBuilder(writer).build();
beanToCsv.write(makeFinalMappingBeanList());
writer.close();
The problem with this approach is that if I use #CsvBindByPosition(position = 0) to control
position then I am not able to generate column names. If I use #CsvBindByName(column = "TradeID") then I am not able to set position of the columns.
Is there a way where I can use both annotations, so that I can create CSV files with column headers and also control column position?
Regards,
Vikram Pathania
I've had similar problem. AFAIK there is no build-in functionality in OpenCSV that will allow to write bean to CSV with custom column names and ordering.
There are two main MappingStrategyies that are available in OpenCSV out of the box:
HeaderColumnNameMappingStrategy: that allows to map CVS file columns to bean fields based on custom name; when writing bean to CSV this allows to change column header name but we have no control on column order
ColumnPositionMappingStrategy: that allows to map CSV file columns to bean fields based on column ordering; when writing bean to CSV we can control column order but we get an empty header (implementation returns new String[0] as a header)
The only way I found to achieve both custom column names and ordering is to write your custom MappingStrategy.
First solution: fast and easy but hardcoded
Create custom MappingStrategy:
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
private static final String[] HEADER = new String[]{"TradeID", "GWML GUID", "MXML GUID", "GWML File", "MxML File", "MxML Counterparty", "GWML Counterparty"};
#Override
public String[] generateHeader() {
return HEADER;
}
}
And use it in StatefulBeanToCsvBuilder:
final CustomMappingStrategy<MappingsBean> mappingStrategy = new CustomMappingStrategy<>();
mappingStrategy.setType(MappingsBean.class);
final StatefulBeanToCsv<MappingsBean> beanToCsv = new StatefulBeanToCsvBuilder<MappingsBean>(writer)
.withMappingStrategy(mappingStrategy)
.build();
beanToCsv.write(makeFinalMappingBeanList());
writer.close();
In the MappingsBean class we keep the CsvBindByPosition annotations to control ordering (in this solution the CsvBindByName annotations are not needed). Thanks to the custom mapping strategy, the header column names are included in the resulting CSV file.
The downside of this solution is that when we change the column ordering through the CsvBindByPosition annotation, we also have to manually change the HEADER constant in our custom mapping strategy.
Second solution: more flexible
The first solution works, but it was not good for me. Based on the built-in implementations of MappingStrategy I came up with yet another implementation:
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
#Override
public String[] generateHeader() {
final int numColumns = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader();
}
header = new String[numColumns + 1];
BeanField beanField;
for (int i = 0; i <= numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null || beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
You can use this custom strategy in StatefulBeanToCsvBuilder exactly the same way as in the first solution (remember to invoke mappingStrategy.setType(MappingsBean.class);, otherwise this solution will not work).
Currently our MappingsBean has to contain both CsvBindByName and CsvBindByPosition annotations. The first gives the header column name and the second defines the ordering of the columns in the output CSV. Now if we change (using the annotations) either a column name or the ordering in the MappingsBean class, that change will be reflected in the output CSV file.
Corrected the above answer to match the newer version.
package csvpojo;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.reflect.FieldUtils;
import com.opencsv.bean.BeanField;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.setColumnMapping(new String[FieldUtils.getAllFields(bean.getClass()).length]);
final int numColumns = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader(bean);
}
String[] header = new String[numColumns + 1];
BeanField<T> beanField;
for (int i = 0; i <= numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField<T> beanField) {
if (beanField == null || beanField.getField() == null
|| beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField()
.getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
Then call this to generate the CSV. I have used Visitors as my POJO to populate; update it wherever necessary.
CustomMappingStrategy<Visitors> mappingStrategy = new CustomMappingStrategy<>();
mappingStrategy.setType(Visitors.class);
// writing sample
List<Visitors> beans2 = new ArrayList<Visitors>();
Visitors v = new Visitors();
v.set_1_firstName(" test1");
v.set_2_lastName("lastname1");
v.set_3_visitsToWebsite("876");
beans2.add(v);
v = new Visitors();
v.set_1_firstName(" firstsample2");
v.set_2_lastName("lastname2");
v.set_3_visitsToWebsite("777");
beans2.add(v);
Writer writer = new FileWriter("G://output.csv");
StatefulBeanToCsv<Visitors> beanToCsv = new StatefulBeanToCsvBuilder<Visitors>(writer)
.withMappingStrategy(mappingStrategy).withSeparator(',').withApplyQuotesToAll(false).build();
beanToCsv.write(beans2);
writer.close();
My bean annotations looks like this
#CsvBindByName (column = "First Name", required = true)
#CsvBindByPosition(position=1)
private String firstName;
#CsvBindByName (column = "Last Name", required = true)
#CsvBindByPosition(position=0)
private String lastName;
I wanted to achieve bi-directional import/export: to be able to import the generated CSV back into a POJO and vice versa.
I was not able to use #CsvBindByPosition for this, because in that case ColumnPositionMappingStrategy is selected automatically. Per the documentation, this strategy requires that the file does NOT have a header.
What I've used to achieve the goal:
HeaderColumnNameMappingStrategy
mappingStrategy.setColumnOrderOnWrite(Comparator<String> writeOrder)
CsvUtils to read/write csv
import com.opencsv.CSVWriter;
import com.opencsv.bean.*;
import org.springframework.web.multipart.MultipartFile;
import java.io.*;
import java.util.List;
public class CsvUtils {
private CsvUtils() {
}
public static <T> String convertToCsv(List<T> entitiesList, MappingStrategy<T> mappingStrategy) throws Exception {
try (Writer writer = new StringWriter()) {
StatefulBeanToCsv<T> beanToCsv = new StatefulBeanToCsvBuilder<T>(writer)
.withMappingStrategy(mappingStrategy)
.withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
.build();
beanToCsv.write(entitiesList);
return writer.toString();
}
}
#SuppressWarnings("unchecked")
public static <T> List<T> convertFromCsv(MultipartFile file, Class clazz) throws IOException {
try (Reader reader = new BufferedReader(new InputStreamReader(file.getInputStream()))) {
CsvToBean<T> csvToBean = new CsvToBeanBuilder<T>(reader).withType(clazz).build();
return csvToBean.parse();
}
}
}
POJO for import/export
public class LocalBusinessTrainingPairDTO {
//this is used for CSV columns ordering on exporting LocalBusinessTrainingPairs
public static final String[] FIELDS_ORDER = {"leftId", "leftName", "rightId", "rightName"};
#CsvBindByName(column = "leftId")
private int leftId;
#CsvBindByName(column = "leftName")
private String leftName;
#CsvBindByName(column = "rightId")
private int rightId;
#CsvBindByName(column = "rightName")
private String rightName;
// getters/setters omitted, do not forget to add them
}
Custom comparator for predefined String ordering:
public class OrderedComparatorIgnoringCase implements Comparator<String> {
private List<String> predefinedOrder;
public OrderedComparatorIgnoringCase(String[] predefinedOrder) {
this.predefinedOrder = new ArrayList<>();
for (String item : predefinedOrder) {
this.predefinedOrder.add(item.toLowerCase());
}
}
#Override
public int compare(String o1, String o2) {
return predefinedOrder.indexOf(o1.toLowerCase()) - predefinedOrder.indexOf(o2.toLowerCase());
}
}
Ordered writing for POJO (answer to initial question)
public static void main(String[] args) throws Exception {
List<LocalBusinessTrainingPairDTO> localBusinessTrainingPairsDTO = new ArrayList<>();
LocalBusinessTrainingPairDTO localBusinessTrainingPairDTO = new LocalBusinessTrainingPairDTO();
localBusinessTrainingPairDTO.setLeftId(1);
localBusinessTrainingPairDTO.setLeftName("leftName");
localBusinessTrainingPairDTO.setRightId(2);
localBusinessTrainingPairDTO.setRightName("rightName");
localBusinessTrainingPairsDTO.add(localBusinessTrainingPairDTO);
//Creating HeaderColumnNameMappingStrategy
HeaderColumnNameMappingStrategy<LocalBusinessTrainingPairDTO> mappingStrategy = new HeaderColumnNameMappingStrategy<>();
mappingStrategy.setType(LocalBusinessTrainingPairDTO.class);
//Setting predefined order using String comparator
mappingStrategy.setColumnOrderOnWrite(new OrderedComparatorIgnoringCase(LocalBusinessTrainingPairDTO.FIELDS_ORDER));
String csv = convertToCsv(localBusinessTrainingPairsDTO, mappingStrategy);
System.out.println(csv);
}
Read exported CSV back to POJO (addition to original answer)
Important: CSV can be unordered, as we are still using binding by name:
public static void main(String[] args) throws Exception {
//omitted code from writing
String csv = convertToCsv(localBusinessTrainingPairsDTO, mappingStrategy);
//Exported CSV should be compatible for further import
File temp = File.createTempFile("tempTrainingPairs", ".csv");
temp.deleteOnExit();
BufferedWriter bw = new BufferedWriter(new FileWriter(temp));
bw.write(csv);
bw.close();
MultipartFile multipartFile = new MockMultipartFile("tempTrainingPairs.csv", new FileInputStream(temp));
List<LocalBusinessTrainingPairDTO> localBusinessTrainingPairDTOList = convertFromCsv(multipartFile, LocalBusinessTrainingPairDTO.class);
}
To conclude:
We can read CSV into the POJO regardless of column order, because we are using #CsvBindByName.
We can control the column order on write using the custom comparator.
In the latest version the solution from sebast26 no longer works. However, the basic idea is still very good. Here is a working solution with v5.0:
import com.opencsv.bean.BeanField;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
import org.apache.commons.lang3.StringUtils;
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
final int numColumns = getFieldMap().values().size();
super.generateHeader(bean);
String[] header = new String[numColumns];
BeanField beanField;
for (int i = 0; i < numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null || beanField.getField().getDeclaredAnnotationsByType(
CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
And the model looks like this:
#CsvBindByName(column = "id")
#CsvBindByPosition(position = 0)
private Long id;
#CsvBindByName(column = "name")
#CsvBindByPosition(position = 1)
private String name;
And my generation helper looks something like this:
public static <T extends AbstractCsv> String createCsv(List<T> data, Class<T> beanClazz) {
CustomMappingStrategy<T> mappingStrategy = new CustomMappingStrategy<T>();
mappingStrategy.setType(beanClazz);
StringWriter writer = new StringWriter();
String csv = "";
try {
StatefulBeanToCsv sbc = new StatefulBeanToCsvBuilder(writer)
.withSeparator(';')
.withMappingStrategy(mappingStrategy)
.build();
sbc.write(data);
csv = writer.toString();
} catch (CsvRequiredFieldEmptyException e) {
// TODO add some logging...
} catch (CsvDataTypeMismatchException e) {
// TODO add some logging...
} finally {
try {
writer.close();
} catch (IOException e) {
}
}
return csv;
}
The following works for me to map a POJO to a CSV file with custom column positioning and custom column headers (tested with opencsv-5.0) :
public class CustomBeanToCSVMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
String[] headersAsPerFieldName = getFieldMap().generateHeader(bean); // header name based on field name
String[] header = new String[headersAsPerFieldName.length];
for (int i = 0; i <= headersAsPerFieldName.length - 1; i++) {
BeanField beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField); // header name based on #CsvBindByName annotation
if (columnHeaderName.isEmpty()) // No #CsvBindByName is present
columnHeaderName = headersAsPerFieldName[i]; // defaults to header name based on field name
header[i] = columnHeaderName;
}
headerIndex.initializeHeaderIndex(header);
return header;
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null || beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
Pojo
Column Positioning in the generated CSV file:
The column positioning in the generated CSV file will be as per the annotation #CsvBindByPosition
Header name in the generated CSV file:
If the field has #CsvBindByName, the generated header will be as per the annotation
If the field doesn't have #CsvBindByName, then the generated header will be as per the field name
#Getter #Setter #ToString
public class Pojo {
#CsvBindByName(column="Voucher Series") // header: "Voucher Series"
#CsvBindByPosition(position=0)
private String voucherSeries;
#CsvBindByPosition(position=1) // header: "salePurchaseType"
private String salePurchaseType;
}
Using the above Custom Mapping Strategy:
CustomBeanToCSVMappingStrategy<Pojo> mappingStrategy = new CustomBeanToCSVMappingStrategy<>();
mappingStrategy.setType(Pojo.class);
StatefulBeanToCsv<Pojo> beanToCsv = new StatefulBeanToCsvBuilder<Pojo>(writer)
.withSeparator(CSVWriter.DEFAULT_SEPARATOR)
.withMappingStrategy(mappingStrategy)
.build();
beanToCsv.write(pojoList);
Thanks for this thread, it has been really useful for me. I've enhanced the provided solution a little bit so that it also accepts POJOs where some fields are not annotated (not meant to be read/written):
public class ColumnAndNameMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.setColumnMapping(new String[getAnnotatedFields(bean)]);
final int numColumns = getAnnotatedFields(bean);
final int totalFieldNum = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader(bean);
}
String[] header = new String[numColumns];
BeanField<T> beanField;
for (int i = 0; i <= totalFieldNum; i++) {
beanField = findField(i);
if (isFieldAnnotated(beanField.getField())) {
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
}
return header;
}
private int getAnnotatedFields(T bean) {
return (int) Arrays.stream(FieldUtils.getAllFields(bean.getClass()))
.filter(this::isFieldAnnotated)
.count();
}
private boolean isFieldAnnotated(Field f) {
return f.isAnnotationPresent(CsvBindByName.class) || f.isAnnotationPresent(CsvCustomBindByName.class);
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null) {
return StringUtils.EMPTY;
}
Field field = beanField.getField();
if (field.getDeclaredAnnotationsByType(CsvBindByName.class).length != 0) {
final CsvBindByName bindByNameAnnotation = field.getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
if (field.getDeclaredAnnotationsByType(CsvCustomBindByName.class).length != 0) {
final CsvCustomBindByName bindByNameAnnotation = field.getDeclaredAnnotationsByType(CsvCustomBindByName.class)[0];
return bindByNameAnnotation.column();
}
return StringUtils.EMPTY;
}
}
If you're only interested in sorting the CSV columns based on the order in which member variables appear in your model class (CsvRow row in this example), then you can use a Comparator implementation to solve this in a rather simple manner. Here's an example that does this in Kotlin:
class ByMemberOrderCsvComparator : Comparator<String> {
private val memberOrder by lazy {
FieldUtils.getAllFields(CsvRow::class.java)
.map { it.getDeclaredAnnotation(CsvBindByName::class.java) }
.map { it?.column ?: "" }
.map { it.toUpperCase(Locale.US) } // OpenCSV UpperCases all headers, so we do this to match
}
override fun compare(field1: String?, field2: String?): Int {
return memberOrder.indexOf(field1) - memberOrder.indexOf(field2)
}
}
This Comparator does the following:
Fetches each member variable field in our data class (CsvRow)
Finds all the ones with the #CsvBindByName annotation (in the order you specified them in the CsvRow model)
Upper cases each to match the default OpenCsv implementation
Next, apply this Comparator to your MappingStrategy, so it'll sort based off the specified order:
val mappingStrategy = HeaderColumnNameMappingStrategy<CsvRow>()
mappingStrategy.setColumnOrderOnWrite(ByMemberOrderCsvComparator())
mappingStrategy.type = CsvRow::class.java
mappingStrategy.setErrorLocale(Locale.US)
val csvWriter = StatefulBeanToCsvBuilder<CsvRow>(writer)
.withMappingStrategy(mappingStrategy)
.build()
For reference, here's an example CsvRow class (you'll want to replace this with your own model for your needs):
data class CsvRow(
#CsvBindByName(column = "Column 1")
val column1: String,
#CsvBindByName(column = "Column 2")
val column2: String,
#CsvBindByName(column = "Column 3")
val column3: String,
// Other columns here ...
)
Which would produce a CSV as follows:
"COLUMN 1","COLUMN 2","COLUMN 3",...
"value 1a","value 2a","value 3a",...
"value 1b","value 2b","value 3b",...
The benefit of this approach is that it removes the need to hard-code any of your column names, which should greatly simplify things if you ever need to add/remove columns.
Here is a solution for versions greater than 4.3:
public class MappingBean {
#CsvBindByName(column = "column_a")
private String columnA;
#CsvBindByName(column = "column_b")
private String columnB;
#CsvBindByName(column = "column_c")
private String columnC;
// getters and setters
}
And use it as example:
import org.apache.commons.collections4.comparators.FixedOrderComparator;
...
var mappingStrategy = new HeaderColumnNameMappingStrategy<MappingBean>();
mappingStrategy.setType(MappingBean.class);
mappingStrategy.setColumnOrderOnWrite(new FixedOrderComparator<>("COLUMN_C", "COLUMN_B", "COLUMN_A"));
var sbc = new StatefulBeanToCsvBuilder<MappingBean>(writer)
.withMappingStrategy(mappingStrategy)
.build();
Result:
column_c | column_b | column_a
The following solution works with opencsv 5.0.
First, you need to inherit from the ColumnPositionMappingStrategy class and override the generateHeader method to create your custom header, utilizing both the CsvBindByName and CsvBindByPosition annotations, as shown below.
import com.opencsv.bean.BeanField;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.CsvBindByName;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;
/**
* #param <T>
*/
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
/*
* (non-Javadoc)
*
* #see com.opencsv.bean.ColumnPositionMappingStrategy#generateHeader(java.lang.
* Object)
*/
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
final int numColumns = getFieldMap().values().size();
if (numColumns == -1) {
return super.generateHeader(bean);
}
String[] header = new String[numColumns];
super.setColumnMapping(header);
BeanField<T, Integer> beanField;
for (int i = 0; i < numColumns; i++) {
beanField = findField(i);
String columnHeaderName = beanField.getField().getDeclaredAnnotation(CsvBindByName.class).column();
header[i] = columnHeaderName;
}
return header;
}
}
The next step is to use this mapping strategy while writing a bean to CSV as below.
CustomMappingStrategy<ScanReport> strategy = new CustomMappingStrategy<>();
strategy.setType(ScanReport.class);
// Write a bean to csv file.
StatefulBeanToCsv<ScanReport> beanToCsv = new StatefulBeanToCsvBuilder<ScanReport>(writer)
.withMappingStrategy(strategy).build();
beanToCsv.write(beanList);
I've improved on previous answers by removing all references to deprecated APIs while using the latest release of opencsv (4.6).
A Generic Kotlin Solution
/**
* Custom OpenCSV [ColumnPositionMappingStrategy] that allows for a header line to be generated from a target CSV
* bean model class using the following annotations when present:
* * [CsvBindByName]
* * [CsvCustomBindByName]
*/
class CustomMappingStrategy<T>(private val beanType: Class<T>) : ColumnPositionMappingStrategy<T>() {
init {
setType(beanType)
setColumnMapping(*getAnnotatedFields().map { it.extractHeaderName() }.toTypedArray())
}
override fun generateHeader(bean: T): Array<String> = columnMapping
private fun getAnnotatedFields() = beanType.declaredFields.filter { it.isAnnotatedByName() }.toList()
private fun Field.isAnnotatedByName() = isAnnotationPresent(CsvBindByName::class.java) || isAnnotationPresent(CsvCustomBindByName::class.java)
private fun Field.extractHeaderName() =
getAnnotation(CsvBindByName::class.java)?.column ?: getAnnotation(CsvCustomBindByName::class.java)?.column ?: EMPTY
}
Then use it as follows:
private fun csvBuilder(writer: Writer) =
StatefulBeanToCsvBuilder<MappingsBean>(writer)
.withSeparator(ICSVWriter.DEFAULT_SEPARATOR)
.withMappingStrategy(CustomMappingStrategy(MappingsBean::class.java))
.withApplyQuotesToAll(false)
.build()
// Kotlin try-with-resources construct
PrintWriter(File("$reportOutputDir/$REPORT_FILENAME")).use { writer ->
csvBuilder(writer).write(makeFinalMappingBeanList())
}
and for completeness, here's the CSV bean as a Kotlin data class:
data class MappingsBean(
#field:CsvBindByName(column = "TradeID")
#field:CsvBindByPosition(position = 0, required = true)
private val tradeId: String,
#field:CsvBindByName(column = "GWML GUID", required = true)
#field:CsvBindByPosition(position = 1)
private val gwmlGUID: String,
#field:CsvBindByName(column = "MXML GUID", required = true)
#field:CsvBindByPosition(position = 2)
private val mxmlGUID: String,
#field:CsvBindByName(column = "GWML File")
#field:CsvBindByPosition(position = 3)
private val gwmlFile: String? = null,
#field:CsvBindByName(column = "MxML File")
#field:CsvBindByPosition(position = 4)
private val mxmlFile: String? = null,
#field:CsvBindByName(column = "MxML Counterparty")
#field:CsvBindByPosition(position = 5)
private val mxmlCounterParty: String? = null,
#field:CsvBindByName(column = "GWML Counterparty")
#field:CsvBindByPosition(position = 6)
private val gwmlCounterParty: String? = null
)
I think the intended and most flexible way of handling the order of the header columns is to inject a comparator via HeaderColumnNameMappingStrategy.setColumnOrderOnWrite().
For me the most intuitive way was to write the CSV columns in the same order as they are specified in the CsvBean, but you can also adjust the Comparator to make use of your own annotations where you specify the order. Don't forget to rename the Comparator class then ;)
Integration:
HeaderColumnNameMappingStrategy<MyCsvBean> mappingStrategy = new HeaderColumnNameMappingStrategy<>();
mappingStrategy.setType(MyCsvBean.class);
mappingStrategy.setColumnOrderOnWrite(new ClassFieldOrderComparator(MyCsvBean.class));
Comparator:
private class ClassFieldOrderComparator implements Comparator<String> {
List<String> fieldNamesInOrderWithinClass;
public ClassFieldOrderComparator(Class<?> clazz) {
fieldNamesInOrderWithinClass = Arrays.stream(clazz.getDeclaredFields())
.filter(field -> field.getAnnotation(CsvBindByName.class) != null)
// Handle order by your custom annotation here
//.sorted((field1, field2) -> {
// int field1Order = field1.getAnnotation(YourCustomOrderAnnotation.class).getOrder();
// int field2Order = field2.getAnnotation(YourCustomOrderAnnotation.class).getOrder();
// return Integer.compare(field1Order, field2Order);
//})
.map(field -> field.getName().toUpperCase())
.collect(Collectors.toList());
}
#Override
public int compare(String o1, String o2) {
int fieldIndexo1 = fieldNamesInOrderWithinClass.indexOf(o1);
int fieldIndexo2 = fieldNamesInOrderWithinClass.indexOf(o2);
return Integer.compare(fieldIndexo1, fieldIndexo2);
}
}
This can be done using a HeaderColumnNameMappingStrategy along with a custom Comparator as well, which is the approach recommended by the official docs: http://opencsv.sourceforge.net/#mapping_strategies
File reportFile = new File(reportOutputDir + "/" + REPORT_FILENAME);
Writer writer = new PrintWriter(reportFile);
final List<String> order = List.of("TradeID", "GWML GUID", "MXML GUID", "GWML File", "MxML File", "MxML Counterparty", "GWML Counterparty");
final FixedOrderComparator comparator = new FixedOrderComparator(order);
HeaderColumnNameMappingStrategy<MappingsBean> strategy = new HeaderColumnNameMappingStrategy<>();
strategy.setType(MappingsBean.class);
strategy.setColumnOrderOnWrite(comparator);
StatefulBeanToCsv<MappingsBean> beanToCsv = new
StatefulBeanToCsvBuilder(writer)
.withMappingStrategy(strategy)
.build();
beanToCsv.write(makeFinalMappingBeanList());
writer.close();
CustomMappingStrategy for a generic class:
public class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.setColumnMapping(new String[FieldUtils.getAllFields(bean.getClass()).length]);
final int numColumns = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader(bean);
}
String[] header = new String[numColumns + 1];
BeanField<T> beanField;
for (int i = 0; i <= numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField<T> beanField) {
if (beanField == null || beanField.getField() == null
|| beanField.getField().getDeclaredAnnotationsByType(CsvBindByName.class).length == 0) {
return StringUtils.EMPTY;
}
final CsvBindByName bindByNameAnnotation = beanField.getField()
.getDeclaredAnnotationsByType(CsvBindByName.class)[0];
return bindByNameAnnotation.column();
}
}
POJO Class
public class Customer{
#CsvBindByPosition(position=1)
#CsvBindByName(column="CUSTOMER", required = true)
private String customer;
}
Client Class
List<Employee> data = getEmployeeRecord();
CustomMappingStrategy<Employee> custom = new CustomMappingStrategy<>();
custom.setType(Employee.class);
StatefulBeanToCsv<Employee> writer = new StatefulBeanToCsvBuilder<Employee>(response.getWriter())
.withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
.withSeparator('|')
.withOrderedResults(false)
.withMappingStrategy(custom)
.build();
writer.write(data);
Here is another version for 5.2, because I had a problem with the #CsvCustomBindByName annotation when I tried the answers above.
I defined a custom annotation:
#Target(ElementType.FIELD)
#Inherited
#Retention(RetentionPolicy.RUNTIME)
public #interface CsvPosition {
int position();
}
and custom mapping strategy
public class CustomMappingStrategy<T> extends HeaderColumnNameMappingStrategy<T> {
private final Field[] fields;
public CustomMappingStrategy(Class<T> clazz) {
fields = clazz.getDeclaredFields();
Arrays.sort(fields, (f1, f2) -> {
CsvPosition position1 = f1.getAnnotation(CsvPosition.class);
CsvPosition position2 = f2.getAnnotation(CsvPosition.class);
return Integer.compare(position1.position(), position2.position());
});
}
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
String[] header = new String[fields.length];
for (Field f : fields) {
CsvPosition position = f.getAnnotation(CsvPosition.class);
header[position.position() - 1] = getName(f);
}
headerIndex.initializeHeaderIndex(header);
return header;
}
private String getName(Field f) {
CsvBindByName csvBindByName = f.getAnnotation(CsvBindByName.class);
CsvCustomBindByName csvCustomBindByName = f.getAnnotation(CsvCustomBindByName.class);
return csvCustomBindByName != null
? csvCustomBindByName.column() == null || csvCustomBindByName.column().isEmpty() ? f.getName() : csvCustomBindByName.column()
: csvBindByName.column() == null || csvBindByName.column().isEmpty() ? f.getName() : csvBindByName.column();
}
}
My POJO beans are annotated like this
public class Record {
#CsvBindByName(required = true)
#CsvPosition(position = 1)
Long id;
#CsvCustomBindByName(required = true, converter = BoolanCSVField.class)
#CsvPosition(position = 2)
Boolean deleted;
...
}
and final code for writer :
CustomMappingStrategy<Record> mappingStrategy = new CustomMappingStrategy<>(Record.class);
mappingStrategy.setType(Record.class);
StatefulBeanToCsv beanToCsv = new StatefulBeanToCsvBuilder(writer)
.withApplyQuotesToAll(false)
.withOrderedResults(true)
.withMappingStrategy(mappingStrategy)
.build();
I hope it will be helpful for someone.
Here is the code to add support for #CsvBindByPosition-based ordering on top of the default HeaderColumnNameMappingStrategy. Tested with the latest version, 5.2.
The approach is to store two maps. The first, headerPositionMap, stores the position of each element so it can be passed to setColumnOrderOnWrite; the second, columnMap, lets us look up the actual column name rather than the capitalized one.
public class HeaderColumnNameWithPositionMappingStrategy<T> extends HeaderColumnNameMappingStrategy<T> {
protected Map<String, String> columnMap;
#Override
public void setType(Class<? extends T> type) throws CsvBadConverterException {
super.setType(type);
columnMap = new HashMap<>(this.getFieldMap().values().size());
Map<String, Integer> headerPositionMap = new HashMap<>(this.getFieldMap().values().size());
for (Field field : type.getDeclaredFields()) {
if (field.isAnnotationPresent(CsvBindByPosition.class) && field.isAnnotationPresent(CsvBindByName.class)) {
int position = field.getAnnotation(CsvBindByPosition.class).position();
String colName = "".equals(field.getAnnotation(CsvBindByName.class).column()) ? field.getName() : field.getAnnotation(CsvBindByName.class).column();
headerPositionMap.put(colName.toUpperCase().trim(), position);
columnMap.put(colName.toUpperCase().trim(), colName);
}
}
super.setColumnOrderOnWrite((String o1, String o2) -> {
if (!headerPositionMap.containsKey(o1) || !headerPositionMap.containsKey(o2)) {
return 0;
}
return headerPositionMap.get(o1) - headerPositionMap.get(o2);
});
}
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
String[] headersRaw = super.generateHeader(bean);
return Arrays.stream(headersRaw).map(h -> columnMap.get(h)).toArray(String[]::new);
}
}
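Wiring this strategy in looks the same as for the other strategies. A small usage sketch, assuming a MyBean class that carries both #CsvBindByName and #CsvBindByPosition (the bean, the list and the file name are placeholders):
public static void exportToCsv(List<MyBean> rows) throws Exception {
    HeaderColumnNameWithPositionMappingStrategy<MyBean> strategy = new HeaderColumnNameWithPositionMappingStrategy<>();
    strategy.setType(MyBean.class);
    try (Writer writer = new FileWriter("export.csv")) {
        StatefulBeanToCsv<MyBean> beanToCsv = new StatefulBeanToCsvBuilder<MyBean>(writer)
                .withMappingStrategy(strategy)
                .build();
        beanToCsv.write(rows); // header names and column order both come from the annotations
    }
}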
If you don't have the getDeclaredAnnotationsByType method, but need the original field name, use:
beanField.getField().getName()
public class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
#Override
public String[] generateHeader() {
final int numColumns = findMaxFieldIndex();
if (!isAnnotationDriven() || numColumns == -1) {
return super.generateHeader();
}
header = new String[numColumns + 1];
BeanField beanField;
for (int i = 0; i <= numColumns; i++) {
beanField = findField(i);
String columnHeaderName = extractHeaderName(beanField);
header[i] = columnHeaderName;
}
return header;
}
private String extractHeaderName(final BeanField beanField) {
if (beanField == null || beanField.getField() == null || beanField.getField().getDeclaredAnnotations().length == 0) {
return StringUtils.EMPTY;
}
return beanField.getField().getName();
}
}
Try something like below:
private static class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
String[] header;
public CustomMappingStrategy(String[] cols) {
header = cols;
}
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
return header;
}
}
Then use it as follows:
String[] columns = new String[]{"Name", "Age", "Company", "Salary"};
CustomMappingStrategy<Employee> mappingStrategy = new CustomMappingStrategy<Employee>(columns);
Here columns are the header columns for your bean and Employee is your bean.
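To finish the wiring, something along these lines should work; the Employee field names, the employees list and the output path are assumptions on my part:
mappingStrategy.setType(Employee.class);
// bean field names, in the same order as the header columns above
mappingStrategy.setColumnMapping("name", "age", "company", "salary");

try (Writer writer = new FileWriter("employees.csv")) {
    StatefulBeanToCsv<Employee> beanToCsv = new StatefulBeanToCsvBuilder<Employee>(writer)
            .withMappingStrategy(mappingStrategy)
            .build();
    beanToCsv.write(employees); // the hardcoded header is written, followed by the positioned fields
}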
Great thread. I don't have any annotations in my POJO, and this is how I did it based on all the previous answers. Hope it helps others.
OpenCSV version: 5.0
List<FinVendor> readVendors = getFromMethod();
String[] fields= {"id","recordNumber","finVendorIdTb","finVenTechIdTb","finShortNameTb","finVenName1Tb","finVenName2Tb"};
String[] csvHeader= {"Id#","Shiv Record Number","Shiv Vendor Id","Shiva Tech Id#","finShortNameTb","finVenName1Tb","finVenName2Tb"};
CustomMappingStrategy<FinVendor> mappingStrategy = new CustomMappingStrategy<>(csvHeader); //csvHeader is the custom header, irrespective of the POJO field names
mappingStrategy.setType(FinVendor.class);
mappingStrategy.setColumnMapping(fields); //POJO mapping fields
StatefulBeanToCsv<FinVendor> beanToCsv = new StatefulBeanToCsvBuilder<FinVendor>(writer).withQuotechar(CSVWriter.NO_QUOTE_CHARACTER).withMappingStrategy(mappingStrategy).build();
beanToCsv.write(readVendors);
//custom mapping class as mentioned in the thread by many users
private static class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
String[] header;
public CustomMappingStrategy(String[] cols) {
header = cols;
}
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.generateHeader(bean);
return header;
}
}
Output:
Id# Shiv Record Number Shiv Vendor Id Fin Tech Id# finShortNameTb finVenName1Tb finVenName2Tb finVenDefaultLocTb
1 VEN00053 678 33316025986 THE ssOHIO S_2 THE UNIVERSITY CHK Test
2 VEN02277 1217 3044374205 Fe3 MECHA_1 FR3INC EFT-1
3 VEN03118 1310 30234484121 PE333PECTUS_1 PER332CTUS AR EFT-1 Test
sebast26's first solution worked for me, but for opencsv version 5.2 it requires a small change in the CustomMappingStrategy class:
class CustomMappingStrategy<T> extends ColumnPositionMappingStrategy<T> {
private static final String[] HEADER = new String[]{"TradeID", "GWML GUID", "MXML GUID", "GWML File", "MxML File", "MxML Counterparty", "GWML Counterparty"};
#Override
public String[] generateHeader(T bean) throws CsvRequiredFieldEmptyException {
super.generateHeader(bean); // without this the file contains ONLY headers
return HEADER;
}
}
In case you need this to preserve column ordering from the original CSV: use a HeaderColumnNameMappingStrategy for reading, then use the same strategy for writing. "Same" in this case meaning not just the same class, but really the same object.
From the javadoc of StatefulBeanToCsvBuilder.withMappingStrategy:
It is perfectly legitimate to read a CSV source, take the mapping
strategy from the read operation, and pass it in to this method for a
write operation. This conserves some processing time, but, more
importantly, preserves header ordering.
This way you will get a CSV including headers, with columns in the same order as the original CSV.
Worked for me using OpenCSV 5.4.
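A minimal sketch of that round trip; the Person bean and the file names are hypothetical, the point is that the very same strategy object is passed to both the reader and the writer:
import com.opencsv.bean.CsvBindByName;
import com.opencsv.bean.CsvToBean;
import com.opencsv.bean.CsvToBeanBuilder;
import com.opencsv.bean.HeaderColumnNameMappingStrategy;
import com.opencsv.bean.StatefulBeanToCsv;
import com.opencsv.bean.StatefulBeanToCsvBuilder;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.Reader;
import java.io.Writer;
import java.util.List;

public class RoundTripExample {

    public static class Person {
        @CsvBindByName(column = "First Name")
        public String firstName;
        @CsvBindByName(column = "Last Name")
        public String lastName;

        public Person() {
            // no-arg constructor required for reading
        }
    }

    public static void main(String[] args) throws Exception {
        // One strategy instance, shared by the read and the write
        HeaderColumnNameMappingStrategy<Person> strategy = new HeaderColumnNameMappingStrategy<>();
        strategy.setType(Person.class);

        List<Person> people;
        try (Reader reader = new FileReader("people-in.csv")) {
            CsvToBean<Person> csvToBean = new CsvToBeanBuilder<Person>(reader)
                    .withMappingStrategy(strategy) // the strategy captures the header order while reading
                    .build();
            people = csvToBean.parse();
        }

        try (Writer writer = new FileWriter("people-out.csv")) {
            StatefulBeanToCsv<Person> beanToCsv = new StatefulBeanToCsvBuilder<Person>(writer)
                    .withMappingStrategy(strategy) // same object, so the original column order is preserved
                    .build();
            beanToCsv.write(people);
        }
    }
}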
It took me some time as well, but I found the solution.
Add the #CsvBindByName and #CsvBindByPosition annotations to your POJO with the right name and position for each field.
My POJO:
#JsonIgnoreProperties(ignoreUnknown = true)
#Getter
#Setter
public class CsvReport {
#CsvBindByName(column = "Campaign")
#CsvBindByPosition(position = 0)
private String program;
#CsvBindByName(column = "Report")
#CsvBindByPosition(position = 1)
private String report;
#CsvBindByName(column = "Metric Label")
#CsvBindByPosition(position = 2)
private String metric;
}
And add this code (my POJO is called CsvReport):
ColumnPositionMappingStrategy<CsvReport> mappingStrategy = new ColumnPositionMappingStrategyBuilder<CsvReport>().build();
mappingStrategy.setType(CsvReport.class);
//add your headers in the order you want them to appear in the file:
String[] columns = new String[] { "Campaign", "Report", "Metric Label"};
mappingStrategy.setColumnMapping(columns);
//Write your headers first in your chosen Writer:
Writer responseWriter = response.getWriter();
responseWriter.append(String.join(",", columns)).append("\n");
// Configure the CSV writer builder
StatefulBeanToCsv<CsvReport> writer = new StatefulBeanToCsvBuilder<CsvReport>(responseWriter)
.withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
.withSeparator(CSVWriter.DEFAULT_SEPARATOR)
.withOrderedResults(true) //I needed to keep the order; pass false if you don't
.withMappingStrategy(mappingStrategy)
.build();
String fileName = "your file name";
response.setHeader(HttpHeaders.CONTENT_DISPOSITION,String.format("attachment; filename=%s", fileName));
writer.write(csvReports);
This will create a new CSV file with your printed headers and ordered fields.

How to read an untidy csv file in java and create a corresponding ArrayList of objects?

I want to read this csvFile into an array of Flight class objects in which each index will refer to an object containing a record from the csvFile.
Here is a blueprint of the Flight class. It's not complete, so I am only providing the data members.
public class Flight {
private String flightID;
private String source;
private String destination;
private <some class to handle time> dep;
private <some class to handle time> arr;
private String[] daysOfWeek;
private <some class to handle date> efff;
private <some class to handle date> efft;
private <some class to handle dates> exc;
}
I want to implement a function something like :
public class DataManager {
public List<Flight> readSpiceJet() {
return new ArrayList<Flight>();
}
}
Feel free to modify this and please help me. :)
Thanks in advance.
You can try the OpenCSV framework.
Have a look at this example:
import java.io.FileReader;
import java.util.List;
import com.opencsv.CSVReader;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.CsvToBean;
public class ParseCSVtoJavaBean
{
public static void main(String args[])
{
CSVReader csvReader = null;
try
{
/**
* Reading the CSV File
* Delimiter is comma
* Default Quote character is double quote
* Start reading from line 1
*/
csvReader = new CSVReader(new FileReader("Employee.csv"),',','"',1);
//mapping of columns with their positions
ColumnPositionMappingStrategy mappingStrategy =
new ColumnPositionMappingStrategy();
//Set mappingStrategy type to Employee Type
mappingStrategy.setType(Employee.class);
//Fields in Employee Bean
String[] columns = new String[]{"empId","firstName","lastName","salary"};
//Setting the colums for mappingStrategy
mappingStrategy.setColumnMapping(columns);
//create instance for CsvToBean class
CsvToBean<Employee> ctb = new CsvToBean<>();
//parsing csvReader(Employee.csv) with mappingStrategy
List<Employee> empList = ctb.parse(mappingStrategy, csvReader);
//Print the Employee Details
for(Employee emp : empList)
{
System.out.println(emp.getEmpId()+" "+emp.getFirstName()+" "
+emp.getLastName()+" "+emp.getSalary());
}
}
catch(Exception ee)
{
ee.printStackTrace();
}
finally
{
try
{
//closing the reader
csvReader.close();
}
catch(Exception ee)
{
ee.printStackTrace();
}
}
}
}
EDIT 1:
To parse dates:
String dateString;
Date date;
public void setDateString(String dateString) {
// This setter receives the raw CSV value; parse dateString here and set the date field as well
this.dateString = dateString;
}
public void setDate(Date date) {
this.date = date;
}
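For the Flight bean from the question, one possible shape is to bind the raw string and derive a java.util.Date inside the setter; the field names and the date pattern are assumptions and should be adjusted to the actual CSV format:
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Flight {
    private String effFromString; // bound to the CSV column via the column mapping
    private Date effFrom;         // derived value used by the rest of the application

    public void setEffFromString(String effFromString) {
        this.effFromString = effFromString;
        try {
            // hypothetical pattern; change it to whatever the CSV actually contains
            this.effFrom = new SimpleDateFormat("dd/MM/yyyy").parse(effFromString);
        } catch (ParseException e) {
            throw new IllegalArgumentException("Unparseable date: " + effFromString, e);
        }
    }

    public String getEffFromString() {
        return effFromString;
    }

    public Date getEffFrom() {
        return effFrom;
    }
}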
