I have a CSV file in which I am able to insert the header on the first run, but when I write the file again the program creates the header again. Is there a way to check whether the CSV file already has a header and, if so, skip writing it?
You would have to read the first line and test if the first column matches the column header you expect. Since your code inserts the header, I'm assuming it knows what the header should look like. You can use this same variable in your header check. Something like:
String HEADER = "column1,column2,column3";
String COLUMN1 = HEADER.substring(0, HEADER.indexOf(",")); //Or just set it to "column1", but that would violate the DRY principle!
//...Get line1, column1 from the file you are reading
if(!line1Column1.equals(COLUMN1))
{
out.write(HEADER);
}
// Print rows of data...
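For completeness, here's a runnable sketch of the same check, assuming the header is "column1,column2,column3" and the output file is named "data.csv" (both are placeholders for your real values):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class HeaderOnce {
    static final String HEADER = "column1,column2,column3";
    static final String COLUMN1 = HEADER.substring(0, HEADER.indexOf(","));

    public static void appendRow(String row) throws IOException {
        File csv = new File("data.csv"); // placeholder file name
        boolean hasHeader = false;
        if (csv.exists()) {
            try (BufferedReader reader = new BufferedReader(new FileReader(csv))) {
                String firstLine = reader.readLine();
                // the file already has a header if its first column matches COLUMN1
                hasHeader = firstLine != null && firstLine.startsWith(COLUMN1);
            }
        }
        try (PrintWriter out = new PrintWriter(new FileWriter(csv, true))) { // append mode
            if (!hasHeader) {
                out.println(HEADER);
            }
            out.println(row);
        }
    }
}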
Are you using a framework for this, or are you doing it yourself? A code snippet would help. Otherwise, you can keep a Boolean flag, or hard-match the first line against the standard header to check for it.
Since you insert the header yourself, couldn't you make it start with, for instance, a hash (#), and if that marker is present, not write the header again?
Regards,
Stéphane
Are you simply appending the records to the existing file, and is that why the program is appending the header after the prior write?
Can you simply check if the file exists and if it does and is not size zero, assume the header is already present?
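A minimal sketch of that size check (plain java.io; "data.csv" and the row values are assumed names):

File csv = new File("data.csv");
boolean writeHeader = !csv.exists() || csv.length() == 0; // header missing if the file is new or empty
try (PrintWriter out = new PrintWriter(new FileWriter(csv, true))) { // append mode
    if (writeHeader) {
        out.println("column1,column2,column3");
    }
    out.println("value1,value2,value3");
}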
I'm using the opencsv library in Java to export a CSV file, but I have a problem. When a string begins with zero, such as 0123456, the export removes the 0 and my CSV shows 123456. The zero is missing. I tried this approach:
"\"\t"+"0123456"+ "\""; but when the CSV is exported it shows "0123456". I don't want that, I want 0123456. I also don't want to edit the file in Excel because some end users don't know how to. How can I export a CSV with opencsv and keep the leading 0 on the string? Please help.
I think the problem is not really in generating the CSV but in the way Excel treats the data when the file is opened via Explorer.
I tried this code and viewed the CSV in a text editor (not Excel); notice that it shows up correctly there, yet when the file is opened in Excel the leading 0s are lost!
import com.opencsv.CSVWriter;
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.List;

CSVWriter writer = new CSVWriter(new FileWriter("yourfile.csv"));
// feed in your array (or convert your data to an array)
String[] entries = "0123131#21212#021213".split("#");
List<String[]> a = new ArrayList<>();
a.add(entries);
//don't apply quotes
writer.writeAll(a,false);
writer.close();
If you really want the leading 0s on numeric values to show when the user opens the file in Excel, then each cell entry should be written in the ="dataHere" format; see the code below:
CSVWriter writer = new CSVWriter(new FileWriter("yourfile.csv"));
// feed in your array (or convert your data to an array)
String[] entries = "=\"0123131\"#=\"21212\"#=\"021213\"".split("#");
List<String[]> a = new ArrayList<>();
a.add(entries);
writer.writeAll(a);
writer.close();
This is how Excel now displays the data when the CSV is opened from Windows Explorer (by double-clicking): the leading zeros are kept.
But if we view the CSV in a text editor, with the data modified to "suit" Excel viewing, it shows the ="..." wrapper around each value.
Also see this link: format-number-as-text-in-csv-when-open-in-both-excel-and-notepad
Have you tried building the String like this: "'"+"0123456"? The ' character marks the number as text when it is parsed by Excel.
For me OpenCSV works correctly (version 5.6).
For example, my CSV file has a row like the following extract:
"999739059";;;"abcdefgh";"001024";
and OpenCSV reads the "001024" field as 001024 correctly. Of course, I have mapped the field to a String, not a Double.
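If it helps, here's a minimal sketch of that kind of mapping, assuming OpenCSV 5.x, a semicolon-separated file named "data.csv" shaped like the extract above, and a made-up Row bean; binding the column to String is what preserves the leading zeros:

import com.opencsv.bean.CsvBindByPosition;
import com.opencsv.bean.CsvToBeanBuilder;
import java.io.FileReader;
import java.util.List;

public class Row {
    // position 4 is the "001024" column in the extract above;
    // mapping it to String (not Double) keeps the leading zeros
    @CsvBindByPosition(position = 4)
    private String code;

    public String getCode() { return code; }

    public static void main(String[] args) throws Exception {
        List<Row> rows = new CsvToBeanBuilder<Row>(new FileReader("data.csv"))
                .withType(Row.class)
                .withSeparator(';')
                .build()
                .parse();
        System.out.println(rows.get(0).getCode()); // prints 001024
    }
}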
But if you still have problems, you can grab a simple yet powerful parser that fully adheres to the RFC 4180 standard:
mykong.com
Mykong shows some examples of using OpenCSV directly and, at the end, writes a simple parser you can use if you don't want to import OpenCSV. The parser works very well, and since its source code is easy to understand, you can modify or customize it for your needs if you still run into problems.
I'm using Apache Commons CSV to read a CSV file. The file has information about itself (date and time of generation) on the last line.
|XXXX |XXXXX|XXXXX|XXXX|
|XXXX |XXXXX|XXXXX|XXXX|
|File generation: 21/01/2019 17.34.00| | | |
So while parsing the file, I'm getting this as a record (obviously).
I'm wondering whether there is any way to exclude it from parsing, and whether Apache Commons CSV has any provision for handling it.
You're reading in a while loop, so you won't know you've reached the end until you actually get there. You have two options:
Bad option: read the file once and count the number of lines, then read it a second time and break the loop when you reach line (counter - 1).
Good option: your file appears to be pipe delimited, so as you process it line by line, simply make sure that line.trim().split("\\|").length > 1, or in your case, only do your work as long as the number of fields on the line is greater than 1. This ensures you don't apply your logic to the lines with just one column, which happens to be your last row, a.k.a. the footer.
Example taken from Apache Commons CSV and modified a little:
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVRecord;
import java.io.FileReader;
import java.io.Reader;

Reader in = new FileReader("path/to/file.csv");
Iterable<CSVRecord> records = CSVFormat.RFC4180.parse(in);
for (CSVRecord record : records) {
    // every line except the footer will have more than one column
    if (record.size() > 1) {
        // do your work here
        String columnOne = record.get(0);
        String columnTwo = record.get(1);
    }
}
Apache Commons CSV provides a way to skip the header (https://commons.apache.org/proper/commons-csv/apidocs/org/apache/commons/csv/CSVFormat.html#withSkipHeaderRecord--), but it doesn't offer a built-in way to ignore the footer. You can, however, simply read all the records and manually ignore the last one.
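For example, a short sketch of that "drop the last record" idea, assuming the default RFC4180 format and a placeholder file name (adjust the CSVFormat to your real delimiter):

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVRecord;
import java.io.FileReader;
import java.io.Reader;
import java.util.List;

Reader in = new FileReader("path/to/file.csv");
List<CSVRecord> records = CSVFormat.RFC4180.parse(in).getRecords();
// process everything except the last record, which is the footer
for (CSVRecord record : records.subList(0, Math.max(records.size() - 1, 0))) {
    // do your work here
}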
I want to write Java code that will parse the last number from a URL. Then I need to check whether that number is present in an Excel file. If it is not found, show an error; if it is present, return the row name.
Say URL: http://foxsd5174:3887/PD/outage/area/v1/device/40122480
and from the Excel data below I need to know which category "40122480" falls under: City or County?
|         |City    |County   |Event Level|
|Device ID|40122480|277136436|268698851  |
To fetch the value from the URL I was thinking of using the code below.
Please help me out.
Use this post in case you don't know how to get the last number of the url:
How to obtain the last path segment of an uri
To read the Excel file, use this tutorial, https://www.callicoder.com/java-read-excel-file-apache-poi/, which uses the Apache POI library to get the job done.
Just compare the cell values with the last number of the URL.
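To sketch the idea (with assumptions: an .xlsx file named "devices.xlsx", the header names in the first row, the Device ID values in the second row, and headers aligned with the value columns; adjust to your real layout):

import java.io.FileInputStream;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class DeviceLookup {
    public static String findCategory(String deviceId, String excelPath) throws Exception {
        try (Workbook workbook = WorkbookFactory.create(new FileInputStream(excelPath))) {
            Sheet sheet = workbook.getSheetAt(0);
            Row headerRow = sheet.getRow(0); // City | County | Event Level
            Row valueRow = sheet.getRow(1);  // Device ID | 40122480 | ...
            DataFormatter formatter = new DataFormatter(); // reads numeric cells as displayed text
            for (Cell cell : valueRow) {
                if (deviceId.equals(formatter.formatCellValue(cell))) {
                    Cell header = headerRow.getCell(cell.getColumnIndex());
                    return header == null ? null : formatter.formatCellValue(header);
                }
            }
        }
        return null; // not found, so the caller can show an error
    }

    public static void main(String[] args) throws Exception {
        System.out.println(findCategory("40122480", "devices.xlsx"));
    }
}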
I wrote the code below to get the last part of the URL.
public class smartoutage {

    public static void main(final String[] args) {
        System.out.println(getLastBitFromUrl(
            "http://goxsd5174:3807/PD/outage/areaEtr/v1/device/40122480?param=true"));
    }

    public static String getLastBitFromUrl(final String url) {
        return url.replaceFirst(".*/([^/?]+).*", "$1");
    }
}
Output is 40122480
Now I need to find whether 40122480 is present in the Excel file and return the row name it belongs to.
|         |City    |County   |Event Level|
|Device ID|40122480|277136436|268698851  |
Depending on the number of digits at the end of the URL, you could parse it in several different ways: with a regex, or with substrings. To compare it against an Excel file, you could first convert the Excel file into a CSV (the formats are compatible) and use a BufferedReader along with a FileReader to read the file in. Split each line on the commas (with a regex or any other parsing method) into an array of strings, then loop until EOF and check whether any of the parsed strings equals "40122480" using String's equals() method. A sketch of that lookup appears after the URL-parsing snippet below.
EDITED*
Rough and quick code I wrote in a few minutes that may help with the parsing:
String url = "http://foxsd5174:3887/PD/outage/area/v1/device/40122480";
System.out.println(url);
// split on "/"; the device number is the last element of the resulting array
String[] parsedUrl = url.split("/");
System.out.println(parsedUrl[parsedUrl.length - 1]);
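And a rough sketch of the CSV lookup described above, assuming the Excel sheet has been saved as CSV with the header names on the first line, the values on the second line, and the headers aligned with the value columns (all of these are assumptions about your layout):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CsvLookup {
    public static String findColumnName(String target, String csvPath) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(csvPath))) {
            String headerLine = reader.readLine();
            if (headerLine == null) {
                return null; // empty file
            }
            String[] headers = headerLine.split(",");
            String line;
            while ((line = reader.readLine()) != null) { // loop until EOF
                String[] cells = line.split(",");
                for (int i = 0; i < cells.length; i++) {
                    // equals() on String, as described above
                    if (cells[i].trim().equals(target)) {
                        return i < headers.length ? headers[i].trim() : null;
                    }
                }
            }
        }
        return null; // not found
    }
}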
I am trying to write the name of a file into Accumulo. I am using accumulo-core-1.43.
For some reason, certain files seem to be written into Accumulo with trailing \x00 characters at the end of the name. The upload is coming through a Java servlet (using the jquery file upload plugin). In the servlet, I check the name of the file with a System.out.println and it looks normal, and I even tried unescaping the string with
org.apache.commons.lang.StringEscapeUtils.unescapeJava(...);
The actual writing to accumulo looks like this:
Mutation mut = new Mutation(new Text(checkSum));
Value val = new Value(new Text(filename).getBytes());
long timestamp = System.currentTimeMillis();
mut.put(new Text(colFam), new Text(EMPTY_BYTES), timestamp, val);
But nothing unusual showed up there (perhaps \x00 isn't escaped?). If I then do a scan on my table in Accumulo, there will be one or more \x00 characters in the file name.
The problem this seems to cause: I return that string within XML when I retrieve a list of files (where it shows up), and when I pass that back to the browser, the XSL that is supposed to render the information in the XML no longer works when these extra characters are present (I'm not sure why that is, either).
In chrome, for the response on these calls, I see that there's three red dots after the file name, and when I hover over it, \u0 pops up (which I think is a different representation of 0/null?).
Anyway, I'm just trying to figure out why this happens, or at the very least, how I can filter out \x00 characters before returning the file in Java. any ideas?
You are likely using the Hadoop Text class incorrectly -- this is not an error in Accumulo. Specifically, the mistake is in this line from your example:
Value val = new Value(new Text(filename).getBytes());
You must adhere to the length provided by the Text class. See the Text javadoc for more information. If you're using Hadoop 2.2.0, you can use the provided copyBytes method on Text. If you're on an older version of Hadoop where this method doesn't exist yet, you can use something like the ByteBuffer class or the System.arraycopy method to get a copy of the byte[] with the proper limits enforced.
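For example, a minimal sketch of trimming the backing array before building the Value (Arrays.copyOf is one portable way to respect getLength() on Hadoop versions that don't have Text.copyBytes()):

import java.util.Arrays;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

Text text = new Text(filename);
// getBytes() returns the reused backing array, which may be longer than the
// encoded string and padded with \x00 bytes; copy only the first getLength() bytes
byte[] trimmed = Arrays.copyOf(text.getBytes(), text.getLength());
Value val = new Value(trimmed);

Mutation mut = new Mutation(new Text(checkSum));
mut.put(new Text(colFam), new Text(EMPTY_BYTES), System.currentTimeMillis(), val);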
I'm creating an Excel file where, once it is created and downloaded, a user isn't allowed to leave empty cells in a specific column (because he will send it back with the information he entered).
I'm using POI HSSFDataValidation with setEmptyCellAllowed(false).
But when the user downloads the file, he can still leave cells empty (by writing some text and then deleting it).
Any suggestions?
Here's my code:
HSSFDataValidation dv = new HSSFDataValidation();
dv.setFirstColumn((short)19);
dv.setLastColumn((short)19);
dv.setFirstRow((short)4);
dv.setLastRow((short)24);
dv.setDataValidationType(HSSFDataValidation.DATA_TYPE_INTEGER);
dv.setOperator(HSSFDataValidation.OPERATOR_BETWEEN);
dv.setFirstFormula("0");
dv.setSecondFormula("1000");
//dv.setEmptyCellAllowed(true);
dv.setEmptyCellAllowed(false);
dv.setShowPromptBox(true);
dv.setSurppressDropDownArrow(false);
dv.setErrorStyle(HSSFDataValidation.ERROR_STYLE_STOP);
//dv.createErrorBox("", "");
//dv.createPromptBox("", "");
sheet.addValidationData(dv);
This is not answering your question directly, but you can use a try/catch block to write something into the cell yourself if the user has left it blank: