Writing the file itself works fine, but I see that really long numbers show up in scientific notation when I open it in Excel.
Example: 8.71129E+12
instead of: 1234567890
How can I avoid this in Java?
I am writing it like this:
String nart = "1236547865452";
csvWriter.append(nart);
I'm not sure how the same can be achieved when writing a plain CSV, but through Apache POI you can write to Excel directly, and you need to set the cell type if you want to see the whole value, like 1236547865452:
cell.setCellType(HSSFCell.CELL_TYPE_NUMERIC);
This is not needed if you are fine with seeing the scientific notation 8.71129E+12.
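For what it's worth, here is a minimal POI sketch of that idea (the file and sheet names are made up); note that a large numeric value may still be displayed in scientific notation unless the cell also gets a plain number format:

import java.io.FileOutputStream;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.CellStyle;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;

public class LongNumberCell {
    public static void main(String[] args) throws Exception {
        Workbook wb = new HSSFWorkbook();
        Sheet sheet = wb.createSheet("data");
        Cell cell = sheet.createRow(0).createCell(0);

        // store the value as a number, not as text
        cell.setCellValue(Double.parseDouble("1236547865452"));

        // a plain "0" format tells Excel to display every digit
        CellStyle style = wb.createCellStyle();
        style.setDataFormat(wb.createDataFormat().getFormat("0"));
        cell.setCellStyle(style);

        try (FileOutputStream out = new FileOutputStream("numbers.xls")) {
            wb.write(out);
        }
        wb.close();
    }
}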
I have a very long string in a text file. It is basically the string below repeated around 1000 times (as one long string, not 1000 separate strings). The string has variables which change with each repetition (those in bold). I'd like to extract the variables in an automated way and return the output into either a CSV or a formatted txt file (Random Bank, Random Rate, Random Product). I can do this successfully using https://regex101.com, however it involves a lot of manual copy & paste. I'd like to write a bash script to automate extracting the information, but have had no luck attempting various grep commands. How can this be done? (I'd also consider doing it in Java.)
[{"AccountName":"Random Product","AccountType":"Variable","AccountTypeId":1,"AER":Random Rate,"CanManageByMobileApp":false,"CanManageByPost":true,"CanManageByTelephone":true,"CanManageInBranch":false,"CanManageOnline":true,"CanOpenByMobileApp":false,"CanOpenByPost":false,"CanOpenByTelephone":false,"CanOpenInBranch":false,"CanOpenOnline":true,"Company":"Random Bank","Id":"S9701Monthly","InterestPaidFrequency":"Monthly"
This is JSON-formatted data, which you can't parse with regular expression engines. Get a JSON parser. If this file is larger than, say, 1GB, find one that lets you 'stream' (the term for parsing the input and dealing with the data as it is parsed, versus the more usual route of turning the entire input into an object; if the file is huge, that object would be huge and you might run out of memory, hence the need for streaming).
Here is one tutorial for Jackson-streaming.
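For illustration only, a rough Jackson streaming sketch (the field names come from the sample above; the input file name, and the assumption that the real file is valid JSON, are mine):

import java.io.File;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

public class ExtractAccounts {
    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        try (JsonParser parser = factory.createParser(new File("accounts.json"))) {
            String bank = null, rate = null, product = null;
            while (parser.nextToken() != null) {
                if (parser.getCurrentToken() != JsonToken.FIELD_NAME) {
                    continue;
                }
                String field = parser.getCurrentName();
                parser.nextToken(); // advance to the field's value
                if ("Company".equals(field)) {
                    bank = parser.getText();
                } else if ("AER".equals(field)) {
                    rate = parser.getText();
                } else if ("AccountName".equals(field)) {
                    product = parser.getText();
                }
                if (bank != null && rate != null && product != null) {
                    // emit one CSV line per repeated object
                    System.out.println(bank + "," + rate + "," + product);
                    bank = rate = product = null;
                }
            }
        }
    }
}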
So I'm having some issues getting Apache POI to evaluate formulas.
Here's the code I call to evaluate formulas before writing:
complete.getCreationHelper().createFormulaEvaluator().evaluateAll();
complete.write(fileOut);
Here's the code I call to write to the cells being used (proving they're numbers):
try {
    cell.setCellValue((Double) grid[i][j]);
}
catch (Exception e) {
    cell.setCellValue((String) grid[i][j]);
}
FYI: grid is a 2D Object array containing only entries of the type double and String.
Here's the formulas I'm trying to evaluate:
"=G13 - H13"
"=STDEV.P(C1:L1)"
"=I13/G13"
Any ideas why the formulas aren't evaluated when I open my final workbook in Excel? Also, when I click on an unevaluated cell and hit Enter, Excel recognizes the formula and evaluates it. In bulk this isn't practical, but I believe it demonstrates that the cells being used are the correct type. Could this be related to the formulas being of the String type?
EDIT:
OK, so it looks like you're supposed to explicitly tell it you have a formula cell. Here's my modified code to do that:
try {
    cell.setCellValue((Double) grid[i][j]);
}
catch (Exception e) {
    String val = (String) grid[i][j];
    if (val != null && val.startsWith("=")) {
        val = val.replaceAll("=", "");
        cell.setCellType(XSSFCell.CELL_TYPE_FORMULA);
        cell.setCellFormula(val);
    }
    else {
        cell.setCellValue(val);
    }
}
Unfortunately you need to remove the equals sign (which is dumb) to pass formulas, and then force a re-evaluation before saving (which is also dumb). When I tried to get it to re-evaluate the formulas, however, it complained:
Caused by:
org.apache.poi.ss.formula.eval.NotImplementedFunctionException:
STDEV.P
I'm imagining that this means Excel has implemented standard deviation calculations but POI hasn't caught up yet?
Try this:
XSSFFormulaEvaluator.evaluateAllFormulaCells(workbook);
or, if you are using xls
HSSFFormulaEvaluator.evaluateAllFormulaCells(hssfWorkbook)
You probably want to call this just before saving.
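A minimal sketch along those lines, assuming an .xlsx workbook (the sheet name, cell positions, values, and file name are placeholders of mine): set the formula without the leading "=", then evaluate everything just before writing.

import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.usermodel.XSSFFormulaEvaluator;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class FormulaSketch {
    public static void main(String[] args) throws Exception {
        XSSFWorkbook workbook = new XSSFWorkbook();
        Sheet sheet = workbook.createSheet("calc");

        Row row = sheet.createRow(0);
        row.createCell(0).setCellValue(10.0); // A1
        row.createCell(1).setCellValue(4.0);  // B1

        // formulas are set without the leading "="
        Cell formulaCell = row.createCell(2); // C1
        formulaCell.setCellFormula("A1-B1");

        // evaluate every formula cell just before saving; functions the POI
        // evaluator does not support (the NotImplementedFunctionException
        // above) will still fail at this point
        XSSFFormulaEvaluator.evaluateAllFormulaCells(workbook);

        try (FileOutputStream out = new FileOutputStream("calc.xlsx")) {
            workbook.write(out);
        }
        workbook.close();
    }
}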
I'm writing output to an Excel file through a ByteArrayOutputStream. I want to write a test that compares the expected ByteArrayOutputStream (checking whether the Excel cell contains the data I require) with the actual output ByteArrayOutputStream. Is there a way to compare two ByteArrayOutputStreams in Java? Also, how should I set up my expected output stream?
You can compare their content by doing something like this:
Arrays.equals(byteArrayOutputStream1.toByteArray(), byteArrayOutputStream2.toByteArray());
You can fill it like any other OutputStream via the write methods if the content is big. If your expected value is small, a better approach is to put the content into a String variable and then do the following comparison instead of the previous one:
Arrays.equals(expectedContent.getBytes(myCharset), byteArrayOutputStream2.toByteArray());
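Wired into a JUnit 4 test, it could look roughly like this (the expected content here is a made-up stand-in for whatever your writer actually produces):

import static org.junit.Assert.assertArrayEquals;

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import org.junit.Test;

public class ExcelOutputTest {

    @Test
    public void outputMatchesExpectedBytes() throws Exception {
        ByteArrayOutputStream actual = new ByteArrayOutputStream();
        // stand-in for the code that really writes the workbook to the stream
        actual.write("Username,Password".getBytes(StandardCharsets.UTF_8));

        String expectedContent = "Username,Password";

        // assertArrayEquals gives a useful failure message, but it is the
        // same comparison as the Arrays.equals check shown above
        assertArrayEquals(expectedContent.getBytes(StandardCharsets.UTF_8),
                actual.toByteArray());
    }
}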
I am trying to develop a data-driven Selenium automation framework in Java. I have written code to read input test data from an Excel sheet. The Excel sheet contains two columns, Username and Password. I read the Excel data using the following code.
String testData;
for (int j = 1; j < currentRow.getPhysicalNumberOfCells(); j++) {
    if (currentRow.getCell(j) != null) {
        if (currentRow.getCell(j).getCellType() == Cell.CELL_TYPE_BOOLEAN) {
            testData = Boolean.toString(currentRow.getCell(j).getBooleanCellValue());
        }
        if (currentRow.getCell(j).getCellType() == Cell.CELL_TYPE_NUMERIC) {
            testData = Double.toString(currentRow.getCell(j).getNumericCellValue());
        }
        if (currentRow.getCell(j).getCellType() == Cell.CELL_TYPE_STRING) {
            testData = currentRow.getCell(j).getStringCellValue();
        }
    }
}
The problem is that if the password is 123, the above code returns the value 123.0 and hence the test case fails. I cannot simply strip the decimal part, because if the actual password were 123.0 it would then come back as 123. How can I read the Excel data exactly as it appears in the cell?
Add cell.setCellType(Cell.CELL_TYPE_STRING); before you start reading.
EDIT: The POI API documentation says,
If what you want to do is get a String value for your numeric cell, stop! This is not the way to do it. Instead, for fetching the string value of a numeric or boolean or date cell, use DataFormatter instead.
Refer to this answer for a better solution!
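A minimal sketch of the DataFormatter approach applied to the loop above (the helper method is mine):

import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;

public class CellText {
    private static final DataFormatter FORMATTER = new DataFormatter();

    // returns the cell content as Excel displays it, so a cell showing
    // 123 comes back as "123" rather than "123.0"
    static String cellText(Row currentRow, int j) {
        return FORMATTER.formatCellValue(currentRow.getCell(j));
    }
}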
Read everything in string format and then compare
Task 1: Read each row from one csv file into a separate txt file.
Task 2: The reverse: in one folder, read the text from each txt file and put it into a row of a single csv. So, read all the txt files into one csv file.
How would you do this? Would Java or Python be a good choice to get this done quickly?
Update:
For Java, there are already some quite useful libraries you can use, for example opencsv or javacsv. If you have no prior knowledge of CSV, better have a look at the Wikipedia article on CSV first. And this post tells you all the possibilities in Java.
Note: Due to the simplicity of the question, someone might presume this is homework. I hereby declare it is not.
More background: I am working on my own machine-learning experiments and setting up a large-scale test set. I need crawling, scraping, and file-type conversion as the basic utilities for the experiment. I am building a lot of things by myself for now, and suddenly want to learn Python due to some recent discoveries and a feeling that Python is more concise than Java for many parsing and file-handling situations. Hence this question.
I just want to save time for both you and me by getting to the gist without stating the not-so-related background. My question is really more about the second part, "Java vs Python": I ran into a few lines of Python code using some csv library (I am not sure which, which is why I asked), but I simply do not know how to use Python yet. Those are all the reasons I asked this question. Thanks.
From what you write there is little need to use anything specific to CSV files. In particular for Task 1, this is a pure data I/O operation on text files. In Python, for instance:
for i, l in enumerate(open(the_file)):
    f = open('new_file_%i.csv' % i, 'w')
    f.write(l)
    f.close()
For Task 2, if you can guarantee that each file has the same structure (same number of fields per row) it is again a pure data I/O operation:
from glob import glob

# glob the per-row files
files = glob('file_*.csv')
target = open('combined.csv', 'w')
for f in files:
    target.write(open(f).read())
    target.write(new_line_separator_for_your_platform)
target.close()
Whether you do this in Java or Python depends on the availability on the target system and your personal preference only.
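If you go the Java route, Task 1 could be sketched with opencsv (mentioned in the question) roughly like this; the file names are placeholders and a recent opencsv on the classpath is assumed:

import java.io.FileReader;
import java.io.FileWriter;
import com.opencsv.CSVReader;

public class SplitCsv {
    public static void main(String[] args) throws Exception {
        // one txt file per row of the input CSV
        try (CSVReader reader = new CSVReader(new FileReader("input.csv"))) {
            String[] row;
            int i = 0;
            while ((row = reader.readNext()) != null) {
                try (FileWriter out = new FileWriter("row_" + i++ + ".txt")) {
                    out.write(String.join(",", row));
                }
            }
        }
    }
}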
In that case I would use Python, since it is often more concise than Java.
Plus, CSV files are really easy to handle with Python without installing anything extra. I don't know about Java.
Task 1
It would roughly be this based on an example from the official documentation:
import csv
with open('some.csv', 'r') as f:
    reader = csv.reader(f)
    rownumber = 0
    for row in reader:
        g = open("anyfile" + str(rownumber) + ".txt", "w")
        # row is a list of fields, so join it back into a single line
        g.write(",".join(row))
        rownumber = rownumber + 1
        g.close()
Task 2
f = open("csvfile.csv","w")
dirList=os.listdir(path)
for fname in dirList:
if fname[-4::] == ".txt":
g = open("fname")
for line in g: f.write(line)
g.close
f.close()
In python,
Task 1:
import csv
with open('file.csv', 'rb') as df:
    reader = csv.reader(df)
    for rownumber, row in enumerate(reader):
        with open(str(rownumber) + '.txt', 'w') as f:
            # row is a list of fields, so join it back into one line
            f.write(','.join(row))
Task 2:
from glob import glob
with open('output.csv', 'wb') as output:
    for f in glob('*.txt'):
        with open(f) as myFile:
            rows = myFile.readlines()
            # readlines() returns a list of lines, so use writelines here
            output.writelines(rows)
You will need to adjust these for your use cases.