How to fetch and validate a CSV header with OpenCSV? (Java)

I want to fetch the header from a CSV file. If I don't use skipLines, I get the header as the array at index 0. But I want to fetch the header directly using HeaderColumnNameMappingStrategy, and that does not work with my code.
I also want to validate the header column list (e.g. the CSV must not contain extra columns).
I have also checked How to validate the csv headers using opencsv, but it was not helpful to me.
@SuppressWarnings({ "unchecked", "rawtypes" })
public Map<String, Object> handleStockFileUpload(MultipartFile file, Long customerId) {
Map<String, Object> responseMap = new HashMap<>();
responseMap.put("datamap", "");
responseMap.put("errormap", "");
responseMap.put("errorkeys", "");
List<Map<String, Integer>> list = new ArrayList<>();
List<StockCsvDTO> csvStockList = new ArrayList<>();
try {
String fileName = new SimpleDateFormat("yyyy_MM_dd_HHmmss").format(new Date()) + "_" + file.getOriginalFilename();
responseMap.put("filename", fileName);
File stockFile = new File(productsUploadFilePath + fileName);
stockFile.getParentFile().mkdirs();
FileOutputStream fos = new FileOutputStream(stockFile);
fos.write(file.getBytes());
fos.close();
CsvTransfer csvTransfer = new CsvTransfer();
ColumnPositionMappingStrategy ms = new ColumnPositionMappingStrategy();
ms.setType(StockCsv.class);
Reader reader = Files.newBufferedReader(Paths.get(productsUploadFilePath + fileName));
CSVReader csvReader = new CSVReader(reader);
CsvToBean cb = new CsvToBeanBuilder(reader)
.withType(StockCsv.class)
.withMappingStrategy(ms)
.withSkipLines(1)
.build();
csvTransfer.setCsvList(cb.parse());
reader.close();
csvStockList = csvTransfer.getCsvList();
} catch (Exception e) {
e.printStackTrace();
responseMap.put("status", "servererror");
}
responseMap.put("datamap", csvStockList);
return responseMap;
}

I found the following solution:
Use @CsvBindByName with HeaderColumnNameMappingStrategy, e.g. annotate your bean properties with @CsvBindByName:
public static class HollywoodActor {
    private int id;
    @CsvBindByName(column = "First Name")
    private String firstName;
    @CsvBindByName(column = "Last Name")
    private String lastName;
    // getter / setter
}
Add a method like this:
public class CsvParser {
    public <T> ParseResult<T> parseByPropertyNames(Reader csvReader, Class<T> beanClass) throws IOException {
        CSVReader reader = new CSVReaderBuilder(csvReader).withCSVParser(new CSVParserBuilder().build()).build();
        CsvToBean<T> bean = new CsvToBean<>();
        HeaderColumnNameMappingStrategy<T> mappingStrategy = new HeaderColumnNameMappingStrategy<>();
        mappingStrategy.setType(beanClass);
        bean.setMappingStrategy(mappingStrategy);
        bean.setCsvReader(reader);
        List<T> beans = bean.parse();
        return new ParseResult<>(mappingStrategy.generateHeader(), beans);
    }
}
Also don't forget to add the ParseResult class:
public class ParseResult <T> {
private final String[] headers;
private final List<T> lines;
// all-args constructor & getters
}
Then use them in your code:
String csv = "Id,First Name,Last Name\n" + "1, \"Johnny\", \"Depp\"\n" + "2, \"Al\", \"Pacino\"";
ParseResult<HollywoodActor> parseResult = parser.parseByPropertyNames(
        new InputStreamReader(new ByteArrayInputStream(csv.getBytes(StandardCharsets.UTF_8))), HollywoodActor.class);
ParseResult.headers gives you the actual headers that were in your .csv file. Just compare them with what's expected.
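For example, a minimal sketch of that comparison (assuming ParseResult exposes a getHeaders() getter; the expected header names are illustrative):
// Sketch: flag any header in the file that is not in the expected list
String[] expectedHeaders = {"Id", "First Name", "Last Name"}; // illustrative
List<String> unexpected = Arrays.stream(parseResult.getHeaders())
        .filter(h -> Arrays.stream(expectedHeaders).noneMatch(e -> e.equalsIgnoreCase(h)))
        .collect(Collectors.toList());
if (!unexpected.isEmpty()) {
    throw new IllegalArgumentException("Unexpected column(s): " + unexpected);
}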
Hope that helps!

Here I was comparing my csvHeader with originalHeader:
List<String> originalHeader = fileUploadUtility.getHeader(new StockCsv());
List<String> invalidHeader = csvHeader.stream()
        .filter(o -> originalHeader.stream().noneMatch(f -> f.equalsIgnoreCase(o)))
        .collect(Collectors.toList());
if (!invalidHeader.isEmpty()) {
    msg = "Invalid column(s) : " + invalidHeader + ". Please remove invalid column(s) from file.";
    resultMap.put(1, msg);
}
public List<String> getHeader(T pojo) {
    final CustomMappingStrategy<T> mappingStrategy = new CustomMappingStrategy<>();
    mappingStrategy.setType((Class<? extends T>) pojo.getClass());
    String[] header = mappingStrategy.generateHeader();
    return Arrays.asList(header);
}
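The csvHeader list used above isn't shown being built; a minimal sketch of one way to obtain it, assuming the first line of the file is the header row (the helper name readCsvHeader is hypothetical):
// Sketch: read only the header row with CSVReader before bean parsing
public List<String> readCsvHeader(String filePath) throws Exception {
    try (CSVReader headerReader = new CSVReader(new FileReader(filePath))) {
        String[] firstLine = headerReader.readNext();
        return firstLine == null ? Collections.<String>emptyList() : Arrays.asList(firstLine);
    }
}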

Here is an alternative to your present problem. First, define what you expect your headers to look like. For example:
public static final ArrayList<String> fileFormat = new ArrayList<> (Arrays.asList("Values1", "Values2", "Values3", "Values4"));
Now, write a method to return custom errors if any exist:
public String validateCsvFileDetails(MultipartFile file, Set<String> requiredHeadersArray) {
Set<String> errors = new HashSet<>();
try {
InputStream stream = file.getInputStream();
BufferedReader reader = new BufferedReader(new InputStreamReader(stream));
String headerLine = reader.readLine();
if (Objects.isNull(headerLine))
return "The file has no headers, please ensure it has the correct upload format";
List<String> headersInFileList;
String[] headersInFileArray;
if (headerLine.contains(",")) {
headersInFileArray = StringUtils.split(headerLine, ",");
headersInFileList = Arrays.asList(headersInFileArray);
} else//the headerline has only one headerfield
{
headersInFileList = Collections.singletonList(headerLine);
}
for (String header : requiredHeadersArray) {
if (!headersInFileList.contains(header))
errors.add("The file has the wrong header format, please ensure " + header + " header is present");
}
//if there are errors, return it
if (!errors.isEmpty())
return sysUtils.getStringFromSet(errors);
//Ensure the csv file actually has values after the header, but don't read beyond the first line
String line;
int counter = 0;
while ((line = reader.readLine()) != null) {
counter++;
if (counter > 0)
break;
}
//if line is null return validation error
if (Objects.isNull(line))
return "Cannot upload empty file";
} catch (Exception e) {
logger.error(new Object() {
}.getClass().getEnclosingMethod().getName(), e);
return "System Error";
}
return null;
}
Now you can validate your file headers as follows:
String errors = validateCsvFileDetails(file, new HashSet<>(fileFormat));
if (errors != null)
    return errors;
// proceed

Give this a try using captureHeader as a pre-filter:
...
private class CustomHeaderColumnNameMappingStrategy<T> extends HeaderColumnNameMappingStrategy<T> {
    private String[] expectedHeadersOrdered = {"Column1", "Column2", "Column3", "Column4", "Column5"};

    @Override
    public void captureHeader(CSVReader reader) throws IOException, CsvRequiredFieldEmptyException {
        String[] actualCsvHeaders = reader.peek();
        String actualHeader, expectedHeader;
        if (expectedHeadersOrdered.length > actualCsvHeaders.length) {
            throw new CsvRequiredFieldEmptyException("Missing header column.");
        } else if (expectedHeadersOrdered.length < actualCsvHeaders.length) {
            throw new IOException("Unexpected extra header column.");
        }
        // Enforce strict column ordering with index
        // TODO: you might want to employ a simple HashMap, List, Set, etc. as needed
        for (int i = 0; i < actualCsvHeaders.length; i++) {
            actualHeader = actualCsvHeaders[i];
            expectedHeader = expectedHeadersOrdered[i];
            if (!expectedHeader.equals(actualHeader)) {
                throw new IOException("Header columns mismatch in ordering.");
            }
        }
        super.captureHeader(reader); // Back to default processing if the headers, including ordering, are as expected
    }
}
CustomHeaderColumnNameMappingStrategy<YourPOJO> yourMappingStrategy = new CustomHeaderColumnNameMappingStrategy<>();
yourMappingStrategy.setType(YourPOJO.class);
CsvToBean<YourPOJO> pojosFromCsv = new CsvToBeanBuilder<YourPOJO>(new FileReader(csvFile))
        .withType(YourPOJO.class)
        .withMappingStrategy(yourMappingStrategy)
        .build();
pojosFromCsv.stream();
Inspired by Using captureHeader in OpenCSV

Related

Append to existing CSV with headers

I have a method that appends to a .csv file, but the problem is that it adds a header row every time as well. How can I append to the .csv correctly?
I am aware that adding to a List would do the job, but this method is called in separate runs.
public static void writeToCSVFileAndSend(String facilityId, int candidateStockTakeContainersCount) throws IOException {
FileWriter report = new FileWriter("/tmp/MonthlyExpectedComplianceSuggestions.csv", true);
LocalDate today = java.time.LocalDate.now();
String[] headers = { "Warehouse", "Expected Count for "+ today.getMonth().getDisplayName(TextStyle.SHORT, Locale.ENGLISH)};
Map<String, Integer> facilityExpectedMonthlyCountMap= new HashMap<String, Integer>() {
{
put(facilityId, candidateStockTakeContainersCount);
}
};
try (CSVPrinter printer = new CSVPrinter(report, CSVFormat.DEFAULT
.withHeader(headers))) {
facilityExpectedMonthlyCountMap.forEach((a, b) -> {
try {
printer.printRecord(a, b);
} catch (IOException e) {
e.printStackTrace();
}
});
}
}
Current Output
Warehouse,Expected Count for Dec
A,2147
Warehouse,Expected Count for Dec
B,0
Expected Output
Warehouse,Expected Count for Dec
A,2147
B,0
To avoid multiple headers, you should create the CSVPrinter object once and reuse it.
Depending on how you are getting the data, you may split the function in two and pass the CSVPrinter object around (see the sketch after the code below).
public static void writeToCSVFileAndSend() throws IOException
{
    File outputCSV = new File("/tmp/MonthlyExpectedComplianceSuggestions.csv");
    LocalDate today = java.time.LocalDate.now();
    String[] headers = { "Warehouse", "Expected Count for " + today.getMonth().getDisplayName(TextStyle.SHORT, Locale.ENGLISH) };
    boolean headerRequired = !outputCSV.exists();
    FileWriter report = new FileWriter(outputCSV, true); // append mode
    CSVPrinter printer;
    if (headerRequired) {
        printer = new CSVPrinter(report, CSVFormat.DEFAULT.withHeader(headers));
    } else {
        printer = new CSVPrinter(report, CSVFormat.DEFAULT);
    }
    // Iterate through combination of facilityId and candidateStockTakeContainersCount and
    // call printRecord
    Map<String, Integer> facilityExpectedMonthlyCountMap = new HashMap<String, Integer>();
    // fill in your data in the map here
    facilityExpectedMonthlyCountMap.forEach((a, b) -> {
        try {
            printer.printRecord(a, b);
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
    printer.close();
}
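A minimal sketch of the split mentioned above (the helper names openPrinter and appendRows are illustrative, not from the original code):
// Sketch: open one CSVPrinter per run, then reuse it for all rows
private static CSVPrinter openPrinter(File csv, String[] headers) throws IOException {
    boolean headerRequired = !csv.exists();
    FileWriter out = new FileWriter(csv, true); // append mode
    return headerRequired
            ? new CSVPrinter(out, CSVFormat.DEFAULT.withHeader(headers))
            : new CSVPrinter(out, CSVFormat.DEFAULT);
}

private static void appendRows(CSVPrinter printer, Map<String, Integer> rows) throws IOException {
    for (Map.Entry<String, Integer> e : rows.entrySet()) {
        printer.printRecord(e.getKey(), e.getValue());
    }
    printer.flush();
}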

How can I sort a ranking list using a specific column from a file and print the whole file sorted? (Java)

I've already tried this but can't make it work.
I also tried to create another while ((line = br.readLine()) != null) {} loop and placed the sort before it, but it wouldn't enter this while loop, so it wouldn't print anything.
The file looks like this:
1-Fred-18-5-0
2-luis-12-33-0
3-Helder-23-10-0
And wanted it to print like this:
2-luis-12-33-0
3-Helder-23-10-0
1-Fred-18-5-0
public static void lerRanking() throws IOException {
File ficheiro = new File("jogadores.txt");
BufferedReader br = new BufferedReader(new FileReader(ficheiro));
List<Integer> jGanhos = new ArrayList<Integer>();
int i = 0;
String line;
String texto = "";
while ((line = br.readLine()) != null) {
String[] col = line.split("-");
int colunas = Integer.parseInt(col[3]);
jGanhos.add(colunas);
i++;
if(i>=jGanhos.size()){
Collections.sort(jGanhos);
Collections.reverse(jGanhos);
for (int j = 0; j < jGanhos.size(); j++) {
if(colunas == jGanhos.get(i)){
texto = texto + line + "\n";
}
}
}
}
PL(texto);
}
Make it step by step:
public static void lerRanking() throws IOException {
File ficheiro = new File("jodagores.txt");
// read file
BufferedReader br = new BufferedReader(new FileReader(ficheiro));
List<String> lines = new ArrayList<>();
String line;
while ((line = br.readLine()) != null) {
lines.add(line);
}
// sort lines
lines.sort(new Comparator<String>() {
@Override
public int compare(String s1, String s2) {
// sort by the 4th column (index 3), descending
return Integer.compare(Integer.parseInt(s2.split("-")[3]), Integer.parseInt(s1.split("-")[3]));
}
});
// concat lines
String texto = "";
for (String l : lines) {
texto += l + "\n";
}
System.out.println(texto);
// PL(texto);
}
Okay, so first of all I think you should introduce a Java class (in my code this is ParsedObject) to manage your objects.
Second, it should implement the Comparable<ParsedObject> interface, so you can easily sort it from anywhere in the code (without passing a custom comparator each time).
Here is the full code:
import java.io.*;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class Main {
public static void main(String[] args) throws IOException {
lerRanking();
}
public static void lerRanking() throws IOException {
File ficheiro = new File("jodagores.txt");
// read lines to a list
List<String> lines = readLines(ficheiro);
// parse them to a list of objects
List<ParsedObject> objects = ParsedObject.from(lines);
// sort
Collections.sort(objects);
// print the output
writeLines(objects);
}
public static List<String> readLines(File ficheiro) throws IOException {
// read file line by line
BufferedReader br = new BufferedReader(new FileReader(ficheiro));
List<String> lines = new ArrayList<>();
String line;
while((line = br.readLine()) != null) {
lines.add(line);
}
br.close(); // THIS IS IMPORTANT never forget to close a Reader :)
return lines;
}
private static void writeLines(List<ParsedObject> objects) throws IOException {
File file = new File("output.txt");
BufferedWriter bw = new BufferedWriter(new FileWriter(file));
for(ParsedObject object : objects) {
// print the output line by line
bw.write(object.originalLine);
}
bw.flush();
bw.close(); // THIS IS IMPORTANT never forget to close a Writer :)
}
// our object that holds the information
static class ParsedObject implements Comparable<ParsedObject> {
// the original line, if needed
public String originalLine;
// the columns
public Integer firstNumber;
public String firstString;
public Integer secondNumber;
public Integer thirdNumber;
public Integer fourthNumber;
// parse line by line
public static List<ParsedObject> from(List<String> lines) {
List<ParsedObject> objects = new ArrayList<>();
for(String line : lines) {
objects.add(ParsedObject.from(line));
}
return objects;
}
// parse one line
public static ParsedObject from(String line) {
String[] splitLine = line.split("-");
ParsedObject parsedObject = new ParsedObject();
parsedObject.originalLine = line + "\n";
parsedObject.firstNumber = Integer.valueOf(splitLine[0]);
parsedObject.firstString = splitLine[1];
parsedObject.secondNumber = Integer.valueOf(splitLine[2]);
parsedObject.thirdNumber = Integer.valueOf(splitLine[3]);
parsedObject.fourthNumber = Integer.valueOf(splitLine[4]);
return parsedObject;
}
@Override
public int compareTo(ParsedObject other) {
return other.thirdNumber.compareTo(this.thirdNumber);
}
}
}
If you have any more questions feel free to ask :)
The easiest way is to first create a class that will hold the data from your file, provided your lines keep the same format:
public class MyClass {
private Integer column1;
private String column2;
private Integer column3;
private Integer column4;
private Integer column5;
public MyClass(String data) {
String[] cols = data.split("-");
if (cols.length != 5) return;
column1 = Integer.parseInt(cols[0]);
column2 = cols[1];
column3 = Integer.parseInt(cols[2]);
column4 = Integer.parseInt(cols[3]);
column5 = Integer.parseInt(cols[4]);
}
public synchronized final Integer getColumn1() {
return column1;
}
public synchronized final String getColumn2() {
return column2;
}
public synchronized final Integer getColumn3() {
return column3;
}
public synchronized final Integer getColumn4() {
return column4;
}
public synchronized final Integer getColumn5() {
return column5;
}
@Override
public String toString() {
return String.format("%d-%s-%d-%d-%d", column1, column2, column3, column4, column5);
}
}
Next you can get a list of your items like this:
public static List<MyClass> getLerRanking() throws IOException {
List<MyClass> items = Files.readAllLines(Paths.get("jogadores.txt"))
.stream()
.filter(line -> !line.trim().isEmpty())
.map(data -> new MyClass(data.trim()))
.filter(data -> data.getColumn4() != null)
.sorted((o1, o2) -> o2.getColumn4().compareTo(o1.getColumn4()))
.collect(Collectors.toList());
return items;
}
This will read your whole file, filter out any blank lines, then parse the data and convert it to MyClass.
It will then make sure that column4 isn't null in the converted objects.
Finally it will reverse sort the objects based off from the value in column 4 and create a list of those items.
To print the results you can do something like this
public static void main(String[] args) throws IOException {
List<MyClass> rankingList = getLerRanking();
rankingList.forEach(item -> System.out.println(item));
}
Since we overrode the toString() method, it will print out each object as it is displayed in the file.
Hope this helps.

Adding String to a list in Java

Please have a look at the code snippet below.
I looked at some solutions on Stack Overflow for adding a String to a list,
but they did not work out well in the case below.
@RequestMapping(value = "/rest/EmployeeDept/", method = RequestMethod.GET)
// ResponseEntity is meant to represent the entire HTTP response
public ResponseEntity<EmployeeDeptResponse> getDept()
{
EmployeeDeptResponse deptResponse = new EmployeeDeptResponse();
HttpStatus httpStatus;
List<EmployeeDept> employeeDeptList = new ArrayList<EmployeeDept>();
try {
DefaultHttpClient httpClient = new DefaultHttpClient();
HttpGet getRequest = new HttpGet(
"http://localhost:8082/rest/EmployeeDept/");
getRequest.addHeader("accept", "application/json");
HttpResponse response = httpClient.execute(getRequest);
if (response.getStatusLine().getStatusCode() != 200) {
throw new RuntimeException("Failed : HTTP error code : "
+ response.getStatusLine().getStatusCode());
}
BufferedReader br = new BufferedReader(
new InputStreamReader((response.getEntity().getContent())));
String output;
while ((output = br.readLine()) != null) {
employeeDeptList.add(output);
}
deptResponse.setItems(employeeDeptList);
httpClient.getConnectionManager().shutdown();
} catch (ClientProtocolException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
httpStatus = HttpStatus.OK;
return new ResponseEntity<EmployeeDeptResponse>(deptResponse,httpStatus);
}
I am getting an error in the while loop: "add in List cannot be applied to java.lang.String".
The list is of type EmployeeDept. The EmployeeDept class looks like this:
package com.springboot.postrgres.model;
import java.io.Serializable;
public class EmployeeDept implements Serializable {
private static final long serialVersionUID = 1L;
private int id;
private String dept;
public EmployeeDept() {
}
public EmployeeDept(int id, String dept) {
this.id = id;
this.dept = dept;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getDept() {
return dept;
}
public void setDept(String dept) {
this.dept = dept;
}
}
In the above code I have a list employeeDeptList and a String output.
I need to add this string to the list.
Can any of you provide suitable suggestions?
Thanks in advance.
employeeDeptList is of type ArrayList<EmployeeDept>.
List<EmployeeDept> employeeDeptList = new ArrayList<EmployeeDept>();
output on the other hand is of type String
String output;
So when you do employeeDeptList.add(output);, you are trying to add a String to your employeeDeptList, when it should be an EmployeeDept.
So you either make output an EmployeeDept or you rethink what you want to do with it.
As a suggestion, I am going to assume that your output contains the information you need to create an EmployeeDept. You probably want to parse that information, create an EmployeeDept with EmployeeDept dept = new EmployeeDept(parsedId, parsedDept); and then add it to the list with employeeDeptList.add(dept);
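For example, a minimal sketch, assuming each response line has the hypothetical form "id,dept" (the question doesn't show the actual line format):
// Sketch: parse each line into an EmployeeDept before adding it to the list (assumed "id,dept" format)
while ((output = br.readLine()) != null) {
    String[] parts = output.split(",", 2);
    if (parts.length == 2) {
        int parsedId = Integer.parseInt(parts[0].trim());
        String parsedDept = parts[1].trim();
        employeeDeptList.add(new EmployeeDept(parsedId, parsedDept));
    }
}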
employeeDeptList is a list of EmployeeDept objects. You are trying to add a String to a list of EmployeeDept, which is not possible unless you change the type of the output variable to EmployeeDept.
If your response is valid JSON (as the accept header suggests), why not map it to objects?
ObjectMapper mapper = new ObjectMapper();
// assuming your response entity content is a list of objects (a JSON array, since you specified header 'application/json')
String jsonArray = IOUtils.toString(response.getEntity().getContent(), encoding);
employeeDeptList = mapper.readValue(jsonArray, TypeFactory.defaultInstance().constructCollectionType(List.class, EmployeeDept.class));
// assuming your response is a single object
String json = IOUtils.toString(response.getEntity().getContent(), encoding);
employeeDeptList.add(mapper.readValue(json, EmployeeDept.class));
// assuming every line of content is an object (does not really make sense)
BufferedReader br = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
String output;
while ((output = br.readLine()) != null) {
    employeeDeptList.add(mapper.readValue(output, EmployeeDept.class));
}
There is a problem in your code.
while ((output = br.readLine()) != null) {
employeeDeptList.add(output);
}
output is a String and you are trying to add it to a List<EmployeeDept>. You can't do that. If you want to add output to a List, you should create a list of Strings, i.e. a List<String>.
As you mentioned, what you are getting is,
{
"1499921014230": {
"id": 1499921014230,
"dept": "mechanics"
},
"1499921019747": {
"id": 1499921019747,
"dept": "civil"
}
}
If you can change that, you can try to change it to a simple array of objects:
[
{
"id": 1499921014230,
"dept": "mechanics"
},
{
"id": 1499921019747,
"dept": "civil"
}
]
Add the dependency below if you use Maven, or just add the .jar to your lib folder:
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20090211</version>
</dependency>
Then try something like this,
// read the whole response body first, then parse it once as a JSON array
StringBuilder body = new StringBuilder();
while ((output = br.readLine()) != null) {
    body.append(output);
}
JSONArray jsonArr = new JSONArray(body.toString());
for (int i = 0; i < jsonArr.length(); i++) {
    JSONObject jsonObj = jsonArr.getJSONObject(i);
    String dept = jsonObj.getString("dept");
    int id = jsonObj.getInt("id");
    System.out.println("id : " + id + " dept : " + dept);
    employeeDeptList.add(new EmployeeDept(id, dept));
}

uniVocity doesn't parse the first column into beans

I'm trying to read CSV files from a GTFS .zip with the help of uniVocity-parsers and ran into an issue that I can't figure out. For some reason it seems the first column of some CSV files won't be parsed correctly. For example, the "stops.txt" file looks like this:
stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station
"de:3811:30215:0:6","Freiburg Stübeweg","48.0248455941735","7.85563688037231","","Parent30215"
"de:8311:30054:0:1","Freiburg Schutternstraße","48.0236251356332","7.72434519425597","","Parent30054"
"de:8311:30054:0:2","Freiburg Schutternstraße","48.0235446600679","7.72438739944883","","Parent30054"
The "stop_id" field won't be parsed correctly will have the value "null"
This is the method I'm using to read the file:
public <T> List<T> readCSV(String path, String file, BeanListProcessor<T> processor) {
List<T> content = null;
try {
// Get zip file
ZipFile zip = new ZipFile(path);
// Get CSV file
ZipEntry entry = zip.getEntry(file);
InputStream in = zip.getInputStream(entry);
CsvParserSettings parserSettings = new CsvParserSettings();
parserSettings.setProcessor(processor);
parserSettings.setHeaderExtractionEnabled(true);
CsvParser parser = new CsvParser(parserSettings);
parser.parse(new InputStreamReader(in));
content = processor.getBeans();
zip.close();
return content;
} catch (Exception e) {
e.printStackTrace();
}
return content;
}
And this is what my Stop class looks like:
public class Stop {
@Parsed
private String stop_id;
@Parsed
private String stop_name;
@Parsed
private String stop_lat;
@Parsed
private String stop_lon;
@Parsed
private String location_type;
@Parsed
private String parent_station;
public Stop() {
}
public Stop(String stop_id, String stop_name, String stop_lat, String stop_lon, String location_type,
String parent_station) {
this.stop_id = stop_id;
this.stop_name = stop_name;
this.stop_lat = stop_lat;
this.stop_lon = stop_lon;
this.location_type = location_type;
this.parent_station = parent_station;
}
// --------------------- Getter --------------------------------
public String getStop_id() {
return stop_id;
}
public String getStop_name() {
return stop_name;
}
public String getStop_lat() {
return stop_lat;
}
public String getStop_lon() {
return stop_lon;
}
public String getLocation_type() {
return location_type;
}
public String getParent_station() {
return parent_station;
}
// --------------------- Setter --------------------------------
public void setStop_id(String stop_id) {
this.stop_id = stop_id;
}
public void setStop_name(String stop_name) {
this.stop_name = stop_name;
}
public void setStop_lat(String stop_lat) {
this.stop_lat = stop_lat;
}
public void setStop_lon(String stop_lon) {
this.stop_lon = stop_lon;
}
public void setLocation_type(String location_type) {
this.location_type = location_type;
}
public void setParent_station(String parent_station) {
this.parent_station = parent_station;
}
@Override
public String toString() {
return "Stop [stop_id=" + stop_id + ", stop_name=" + stop_name + ", stop_lat=" + stop_lat + ", stop_lon="
+ stop_lon + ", location_type=" + location_type + ", parent_station=" + parent_station + "]";
}
}
If I call the method I get this output, which is not correct:
PartialReading pr = new PartialReading();
List<Stop> stops = pr.readCSV("VAGFR.zip", "stops.txt", new BeanListProcessor<Stop>(Stop.class));
for (int i = 0; i < 4; i++) {
System.out.println(stops.get(i).toString());
}
Output:
Stop [stop_id=null, stop_name=Freiburg Stübeweg, stop_lat=48.0248455941735, stop_lon=7.85563688037231, location_type=null, parent_station=Parent30215]
Stop [stop_id=null, stop_name=Freiburg Schutternstraße, stop_lat=48.0236251356332, stop_lon=7.72434519425597, location_type=null, parent_station=Parent30054]
Stop [stop_id=null, stop_name=Freiburg Schutternstraße, stop_lat=48.0235446600679, stop_lon=7.72438739944883, location_type=null, parent_station=Parent30054]
Stop [stop_id=null, stop_name=Freiburg Waltershofen Ochsen, stop_lat=48.0220902613143, stop_lon=7.7205756507492, location_type=null, parent_station=Parent30055]
Does anyone know why this happens and how I can fix it? This also happens in the "routes.txt" and "trips.txt" files that I tested.
This is the GTFS file : http://stadtplan.freiburg.de/sld/VAGFR.zip
If you print the headers you will notice that the first column doesn't look right. That's because you are parsing a file encoded using UTF-8 with a BOM marker.
Basically the file starts with a few bytes indicating what the encoding is. Until version 2.5.*, the parser didn't handle that internally, and you had to skip these bytes to get the correct output:
//... your code here
ZipEntry entry = zip.getEntry(file);
InputStream in = zip.getInputStream(entry);
if(in.read() == 239 & in.read() == 187 & in.read() == 191){
System.out.println("UTF-8 with BOM, bytes discarded");
}
CsvParserSettings parserSettings = new CsvParserSettings();
//...rest of your code here
The above hack will work on any version before 2.5.*, but you could also use Commons IO's BOMInputStream for convenience and cleaner handling of this sort of thing; it's just very slow.
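For reference, a minimal sketch of the BOMInputStream alternative (this wiring is an assumption, not from the original answer):
// Sketch: let Commons IO (org.apache.commons.io.input.BOMInputStream) strip a leading UTF-8 BOM
InputStream in = zip.getInputStream(entry);
InputStream withoutBom = new BOMInputStream(in); // passes the stream through unchanged if no BOM is present
parser.parse(new InputStreamReader(withoutBom, StandardCharsets.UTF_8));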
Updating to a recent version should take care of it automatically.
Hope it helps.

How can I show it first?

I wrote a simple Java application and I have a problem, please help me.
I have a file (just an example):
1.TXT
-------
SET MRED:NAME=MRED:0,MREDID=60;
SET BCT:NAME=BCT:0,NEPE=DCS,T2=5,DK0=KOR;
CREATE LCD:NAME=LCD:0;
-------
and this is my source code
import java.io.IOException;
import java.io.*;
import java.util.StringTokenizer;
class test1 {
private final int FLUSH_LIMIT = 1024 * 1024;
private StringBuilder outputBuffer = new StringBuilder(
FLUSH_LIMIT + 1024);
public static void main(String[] args) throws IOException {
test1 p=new test1();
String fileName = "i:\\1\\1.txt";
File file = new File(fileName);
BufferedReader br = new BufferedReader(new FileReader(file));
String line;
while ((line = br.readLine()) != null) {
StringTokenizer st = new StringTokenizer(line, ";|,");
while (st.hasMoreTokens()) {
String token = st.nextToken();
p.processToken(token);
}
}
p.flushOutputBuffer();
}
private void processToken(String token) {
if (token.startsWith("MREDID=")) {
String value = getTokenValue(token,"=");
outputBuffer.append("MREDID:").append(value).append("\n");
} else if (token.startsWith("DK0=")) {
String value = getTokenValue(token,"=");
outputBuffer.append("DK0=:").append(value).append("\n");
} else if (token.startsWith("NEPE=")) {
String value = getTokenValue(token,"=");
outputBuffer.append("NEPE:").append(value).append("\n");
}
if (outputBuffer.length() > FLUSH_LIMIT) {
flushOutputBuffer();
}
}
private String getTokenValue(String token,String find) {
int start = token.indexOf(find) + 1;
int end = token.length();
String value = token.substring(start, end);
return value;
}
private void flushOutputBuffer() {
System.out.print(outputBuffer);
outputBuffer = new StringBuilder(FLUSH_LIMIT + 1024);
}
}
I want this output :
MREDID:60
DK0=:KOR
NEPE:DCS
But this application shows me this:
MREDID:60
NEPE:DCS
DK0=:KOR
Please tell me how I can handle this, because DK0 must come first. This is just a sample; my real application has 14000 lines.
Thanks ...
Instead of outputting the value when you read it, put it in a hashmap. Once you've read your entire file, output in the order you want by getting the values from the hashmap.
Use a HashMap to store the values and print from it in the desired order after parsing all tokens.
// initialize the map
Map<String, String> ht = new HashMap<>();
// instead of outputBuffer.append, put the values into the map like
ht.put("NEPE", value);
ht.put("DK0", value); // etc.
// print the values after the while loop
System.out.println("MREDID:" + ht.get("MREDID"));
System.out.println("DK0:" + ht.get("DK0"));
System.out.println("NEPE:" + ht.get("NEPE"));
Create a class, something like
class Data {
    private int mredid;
    private String nepe;
    private String dk0;
    public void setMredid(int mredid) {
        this.mredid = mredid;
    }
    public void setNepe(String nepe) {
        this.nepe = nepe;
    }
    public void setDk0(String dk0) {
        this.dk0 = dk0;
    }
    public String toString() {
        String ret = "MREDID:" + mredid + "\n";
        ret = ret + "DK0=:" + dk0 + "\n";
        ret = ret + "NEPE:" + nepe + "\n";
        return ret;
    }
}
Then change processToken to
private void processToken(String token) {
Data data = new Data();
if (token.startsWith("MREDID=")) {
String value = getTokenValue(token,"=");
data.setMredid(Integer.parseInt(value));
} else if (token.startsWith("DK0=")) {
String value = getTokenValue(token,"=");
data.setDk0(value);
} else if (token.startsWith("NEPE=")) {
String value = getTokenValue(token,"=");
data.setNepe(value);
}
outputBuffer.append(data.toString());
if (outputBuffer.length() > FLUSH_LIMIT) {
flushOutputBuffer();
}
}
