I am using the Google Spreadsheet API in Java to read and update a spreadsheet. Let's say I have a few columns, among them A and B.
A contains a formula.
B is pure text.
When I cycle through the SpreadSheet to update some specific rows:
URL listFeedUrl = new URI(worksheet.getListFeedUrl().toString() + "?sq=somefield=" + URLEncoder.encode("\"" + somevalue + "\"").toString()).toURL();
ListFeed listFeed = service.getFeed(listFeedUrl, ListFeed.class);
for (ListEntry row : listFeed.getEntries()) {
    if (something.compareTo(somethingelse) == 0) {
        row.getCustomElements().setValueLocal("B", request.getParameter("B"));
        row.update();
    }
}
The formula in column A is lost; only the result of the formula is kept. I guess it has something to do with the update() method, but it looks like every row that I read loses the formula, not just the one where I am executing the update. What can I do to preserve the formula? I am not even reading/editing that cell... Thanks.
EDIT: As indicated in the documentation, the list-based feed does not handle formulas, but the thing is, I am not reading and/or modifying the cell that contains the formula...
You could try saving the formulas in an array first, then assigning them back to the sheet (in Apps Script):
function myFunction() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheets()[0];
  var range = 'A1:A5';
  var myFormulas = sheet.getRange(range).getFormulas();
  sheet.getRange(range).setFormulas(myFormulas);
}
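If you are using the Java GData client rather than Apps Script, the same idea can be sketched with the cell-based feed, which (unlike the list feed) exposes a formula through the cell's input value. This is only a rough outline under that assumption; worksheet and service are the objects from the question, and the column/row handling is illustrative, not a tested recipe:

// Rough sketch (classic GData client assumed; uses java.util.Map/HashMap and the
// com.google.gdata.* cell feed classes). Read column A through the cell feed,
// remember each formula, then restore any that were clobbered after the updates.
CellQuery query = new CellQuery(worksheet.getCellFeedUrl());
query.setMinimumCol(1);   // column A
query.setMaximumCol(1);
query.setReturnEmpty(false);
CellFeed cellFeed = service.query(query, CellFeed.class);

Map<Integer, String> formulasByRow = new HashMap<Integer, String>();
for (CellEntry cell : cellFeed.getEntries()) {
    // getInputValue() returns the formula text (starting with '=') rather than its result
    formulasByRow.put(cell.getCell().getRow(), cell.getCell().getInputValue());
}

// ... run the list-feed updates from the question here ...

for (CellEntry cell : cellFeed.getEntries()) {
    String formula = formulasByRow.get(cell.getCell().getRow());
    if (formula != null && formula.startsWith("=")) {
        cell.changeInputValueLocal(formula);
        cell.update();
    }
}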
My usual procedure for getting the dimensions of a CSV file is the following:
Get how many rows it has:
I use a while loop to read every line and count up on each successful read. The downside is that it takes time to read the whole file just to count how many rows it has.
Then get how many columns it has:
I use String[] temp = lineOfText.split(","); and then take the length of temp.
Is there any smarter method? Like:
file1 = read.csv;
xDimension = file1.xDimension;
yDimension = file1.yDimension;
I guess it depends on how regular the structure is, and whether you need an exact answer or not.
I could imagine looking at the first few rows (or randomly skipping through the file), and then dividing the file size by average row size to determine a rough row count.
If you control how these files get written, you could potentially tag them or add a metadata file next to them containing row counts.
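A rough sketch of that estimation idea in Java (the 100-line sample size is arbitrary, and line lengths in characters are treated as bytes, so this is only an approximation):

import java.io.*;

public class CsvRowEstimate {
    // Estimate the row count by averaging the length of the first few lines
    // and dividing the total file size by that average.
    static long estimateRowCount(File csvFile) throws IOException {
        long sampleChars = 0;
        int sampleLines = 0;
        BufferedReader reader = new BufferedReader(new FileReader(csvFile));
        try {
            String line;
            while (sampleLines < 100 && (line = reader.readLine()) != null) {
                sampleChars += line.length() + 1; // +1 for the line terminator
                sampleLines++;
            }
        } finally {
            reader.close();
        }
        if (sampleLines == 0) {
            return 0;
        }
        double averageRowSize = (double) sampleChars / sampleLines;
        return Math.round(csvFile.length() / averageRowSize);
    }
}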
Strictly speaking, the way you're splitting the line doesn't cover all possible cases. "hello, world", 4, 5 should read as having 3 columns, not 4.
Your approach won't work with multi-line values (you'll get an invalid number of rows) or quoted values that happen to contain the delimiter (you'll get an invalid number of columns).
You should use a CSV parser such as the one provided by univocity-parsers.
Using the uniVocity CSV parser, the fastest way to determine the dimensions would be with the following code. It parses a 150MB file to give its dimensions in 1.2 seconds:
// Imports needed (from the univocity-parsers library):
//   com.univocity.parsers.common.ParsingContext
//   com.univocity.parsers.common.processor.AbstractRowProcessor
//   com.univocity.parsers.csv.CsvParser
//   com.univocity.parsers.csv.CsvParserSettings

// Let's create our own RowProcessor to analyze the rows
static class CsvDimension extends AbstractRowProcessor {

    int lastColumn = -1;
    long rowCount = 0;

    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        rowCount++;
        if (lastColumn < row.length) {
            lastColumn = row.length;
        }
    }
}
public static void main(String... args) throws FileNotFoundException {
    // let's measure the time roughly
    long start = System.currentTimeMillis();

    //Creates an instance of our own custom RowProcessor, defined above.
    CsvDimension myDimensionProcessor = new CsvDimension();

    CsvParserSettings settings = new CsvParserSettings();
    //This tells the parser that no row should have more than 2,000,000 columns
    settings.setMaxColumns(2000000);

    //Here you can select the column indexes you are interested in reading.
    //The parser will return values for the columns you selected, in the order you defined
    //By selecting no indexes here, no String objects will be created
    settings.selectIndexes(/*nothing here*/);

    //When you select indexes, the columns are reordered so they come in the order you defined.
    //By disabling column reordering, you will get the original row, with nulls in the columns you didn't select
    settings.setColumnReorderingEnabled(false);

    //We instruct the parser to send all rows parsed to your custom RowProcessor.
    settings.setRowProcessor(myDimensionProcessor);

    //Finally, we create a parser
    CsvParser parser = new CsvParser(settings);

    //And parse! All rows are sent to your custom RowProcessor (CsvDimension)
    //I'm using a 150MB CSV file with 1.3 million rows.
    parser.parse(new FileReader(new File("c:/tmp/worldcitiespop.txt")));

    //Nothing else to do. The parser closes the input and does everything for you safely. Let's just get the results:
    System.out.println("Columns: " + myDimensionProcessor.lastColumn);
    System.out.println("Rows: " + myDimensionProcessor.rowCount);
    System.out.println("Time taken: " + (System.currentTimeMillis() - start) + " ms");
}
The output will be:
Columns: 7
Rows: 3173959
Time taken: 1279 ms
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
IMO, what you are doing is an acceptable way to do it. But here are some ways you could make it faster:
Rather than reading lines, which creates a new String Object for each line, just use String.indexOf to find the bounds of your lines
Rather than using line.split, again use indexOf to count the number of commas
Multithreading
I guess these are the options; which one fits will depend on how you use the data:
Store the dimensions of your CSV file when writing it (in the first row or in an additional file)
Use a more efficient way to count lines - maybe http://docs.oracle.com/javase/6/docs/api/java/io/LineNumberReader.html
Instead of creating arrays of a fixed size (assuming that's what you need the line count for), use ArrayLists - this may or may not be more efficient depending on the size of the file.
To find the number of rows you have to read the whole file; there is nothing you can do about that. However, your method of finding the number of columns is a bit inefficient. Instead of split, just count how many times "," appears in the line. You might also add a special case for fields enclosed in quotes, as mentioned by @Vlad.
The String.split method creates an array of strings as a result and splits using a regexp, which is not very efficient.
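For illustration, a comma count with indexOf could look like the sketch below; the helper name is made up, and quoted fields are deliberately not handled:

// Count the columns of one CSV line by scanning for commas with indexOf,
// avoiding the regexp and String[] allocation that split() performs.
// Note: commas inside quoted fields are NOT handled here.
static int countColumns(String line) {
    int columns = 1; // n separators => n + 1 columns
    int index = line.indexOf(',');
    while (index != -1) {
        columns++;
        index = line.indexOf(',', index + 1);
    }
    return columns;
}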
I found this short but interesting solution here:
https://stackoverflow.com/a/5342096/4082824
LineNumberReader lnr = new LineNumberReader(new FileReader(new File("File1")));
lnr.skip(Long.MAX_VALUE);
System.out.println(lnr.getLineNumber() + 1); //Add 1 because line index starts at 0
lnr.close();
My solution simply and correctly processes CSV with multiline cells or quoted values.
For example, we have this CSV file:
1,"""2""","""111,222""","""234;222""","""""","1
2
3"
2,"""2""","""111,222""","""234;222""","""""","2
3"
3,"""5""","""1112""","""10;2""","""""","1
2"
And my solution snippet is:
import java.io.*;

public class CsvDimension {

    public void parse(Reader reader) throws IOException {
        long cells = 0;
        int lines = 0;
        int c;
        boolean quoted = false;
        while ((c = reader.read()) != -1) {
            if (c == '"') {
                quoted = !quoted;
            }
            if (!quoted) {
                if (c == '\n') {
                    lines++;
                    cells++;
                }
                if (c == ',') {
                    cells++;
                }
            }
        }
        System.out.printf("lines : %d\n cells %d\n cols: %d\n", lines, cells, cells / lines);
        reader.close();
    }

    public static void main(String args[]) throws IOException {
        new CsvDimension().parse(new BufferedReader(new FileReader(new File("test.csv"))));
    }
}
I am currently working on creating an app in Google Sheets from code I wrote in Excel VBA. There are 2 sheets; one contains user-submitted info. This info is dates and it populates row by row. There are 8 columns of dates (8 entries per row). The other sheet finds the difference between the dates (i.e. days in between) and calculates an average.
I am having trouble with the script language and syntax but here is what I have so far:
function calcValues() {
  //var ss = SpreadsheetApp.getActiveSheet();
  //ss.insertRowAfter(ss.getLastRow());
  //ss.getRange(ss.getLastRow(),1,1,ss.getLastColumn()).copyTo(ss.getRange(ss.getLastRow()+1,1));

  //spreadsheet variable declared
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  //FormResponses sheet declared as variable
  var sFR = ss.setActiveSheet(ss.getSheetByName("FormResponses"));
  //Last row on form response sheet
  var last = sFR.getLastRow();
  //Declare calcData sheet
  var sCD = ss.setActiveSheet(ss.getSheetByName("CalcData"));
  //Loop to write values to CalcData sheet
  for (var i = 2; i < last; i++) {
    if (sFR.getRange(i, 1).getValue() !== 0) { // compare the cell's value, not the Range object
      sCD.getRange(3, 1, 1, 8).copyTo(sCD.getRange(i + 2, 1));
    }
  }
}
function FormSubmitUpdate() {
  //spreadsheet variable declared
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  //FormResponses sheet declared as variable
  var sFR = ss.setActiveSheet(ss.getSheetByName("FormResponses"));
  //Last row on form response sheet
  var last = sFR.getLastRow();
  //Declare calcData sheet
  var sCD = ss.setActiveSheet(ss.getSheetByName("CalcData"));
  //sCD.getRange("B4").setValue(sFR.getRange("C3").getValue()- sFR.getRange("B3").getValue())
  //sCD.insertRowAfter(sCD.getLastRow());
  sCD.getRange(sCD.getLastRow(),1,1,sCD.getLastColumn()).copyTo(sCD.getRange(sCD.getLastRow()+1,1));
}
I currently have the formulas set in the cells and this code simply copies the formula down. calcValues counts how many user inputs there are and performs that many copies.
FormSubmitUpdate copies the previous row down one row.
When I attempt to perform the subtraction of dates in the code, I get a really strange number. I did some research and found that this happens because the dates are converted to milliseconds, so the difference comes back in milliseconds rather than days.
I would like the subtraction and average to be found using code and not formulas in the sheet. I would also like it to update when a new userform is submitted.
If you can give me some code samples that will do what I am asking I would much appreciate it. I am a VBA guy and I would really like to learn this platform.
Thank you.
I have an Excel file with 3000 rows. I removed 2000 of them (with the MS Excel app), but when I call sheet.getLastRowNum() from code, it gives me 3000 (instead of 1000). How can I remove the blank rows?
I tried the code from here but it doesn't work...
There are two ways to do it:
1.) Without code:
Copy the content of your Excel file and paste it into a new Excel file, then rename it as required.
2.) With code (I did not find any function for this, so I created my own):
You need to check each cell for any kind of blank/empty-string/null value.
Before processing a row (I am assuming you are processing row by row; I am also using org.apache.poi.xssf.usermodel.XSSFRow), put an if check on this method's return value: if it is true, the row (XSSFRow) has some value; otherwise move the iterator on to the next row.
public boolean containsValue(XSSFRow row, int fcell, int lcell)
{
    // StringUtils is from Apache Commons Lang
    boolean flag = false;
    for (int i = fcell; i < lcell; i++) {
        if (StringUtils.isEmpty(String.valueOf(row.getCell(i))) ||
            StringUtils.isWhitespace(String.valueOf(row.getCell(i))) ||
            StringUtils.isBlank(String.valueOf(row.getCell(i))) ||
            String.valueOf(row.getCell(i)).length() == 0 ||
            row.getCell(i) == null) {
            // blank cell: nothing to do
        } else {
            flag = true;
        }
    }
    return flag;
}
So finally your processing method will look like
.
.
.
int fcell = row.getFirstCellNum(); // first cell number of the row
int lcell = row.getLastCellNum();  // last cell number of the row
while (rows.hasNext()) {
    row = (XSSFRow) rows.next(); // advance the row iterator
    if (containsValue(row, fcell, lcell)) {
        .
        .
        .. //processing
        .
        .
    }
}
Hope this will help. :)
I haven't found any solution for easily getting the "real" number of rows, but I've found a solution for removing such rows, which might be useful to someone who's tackling a similar issue. See below.
I've searched a bit and found this solution
All it does is delete the empty rows from the bottom, which might be exactly what you want.
As per my understanding, for deleting the rows you must have selected all the cells and pressed the Delete button. If I am right, then you have deleted the rows the wrong way. Done this way, the cells become blank but are not deleted, so the rows still contain cells with blank values, and that is why they get included in the row count.
The correct way to do this is to select the row from the left, where the row numbers appear. Clicking there on a row number selects the complete row. Select all the required rows with the help of the Shift key, then right-click and select Delete.
This may be helpful for you.
remove rows/columns by the POI API (see the sketch after this list)
transfer xls to csv
transfer csv to xls
Hope this will help you.
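A rough sketch of the first option, assuming Apache POI's usermodel classes (org.apache.poi.ss.usermodel.Sheet, Row, Cell); the removeBlankRows helper name is made up for illustration:

// Drop rows whose cells are all blank so that getLastRowNum() matches the real
// data again. Iterating from the bottom avoids index bookkeeping; removeRow()
// only deletes the row record, so blank rows in the middle of the data would
// additionally need sheet.shiftRows(...) to close the gap.
static void removeBlankRows(Sheet sheet) {
    for (int i = sheet.getLastRowNum(); i >= 0; i--) {
        Row row = sheet.getRow(i);
        if (row == null) {
            continue; // the row was never created, nothing to remove
        }
        boolean blank = true;
        for (Cell cell : row) {
            if (!String.valueOf(cell).trim().isEmpty()) {
                blank = false;
                break;
            }
        }
        if (blank) {
            sheet.removeRow(row);
        }
    }
}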
I'm opening an Excel (xls) file in my Java application with POI.
There are 30 lines in this Excel file.
I need to get the value at column index 9.
My code:
Workbook wb;
wb = WorkbookFactory.create(inp);
Sheet sheet = wb.getSheetAt(0);
for (Row row : sheet) {
    if (row.getLastCellNum() >= 6) {
        for (Cell cell : row) {
            if (cell.getColumnIndex() == 9) {
                //do something
            }
        }
    }
}
Every row in Excel has values in columns 1-14.
My problem is that only some values are recognized. I wrote the same value in every cell at column index 9 (the 10th column in my Excel sheet), but the problem is still the same.
What could cause this problem?
Make sure you set the same date format for all cells in the column (select the column and set the format explicitly). And I believe using the DateUtil class to get the data is more appropriate than calling cell.getDateCellValue().
POI uses 0-based counting for columns. So, if you want the 9th column, you need to fetch the cell with index 8, not 9. It looks like you're checking for the column with index 9, so you are one column out.
If you're not sure about 0 based indexing, then the safest thing is to use the CellReference class to help you. This will translate between Excel style references, eg A1, and POI style 0-based offsets eg 0,0. Use something like:
CellReference ref = new CellReference("I10");
Row r = sheet.getRow(ref.getRow());
if (r == null) {
    // That row is empty
} else {
    Cell c = r.getCell(ref.getCol());
    // c is now the cell at I10
}
Seems to be a problem with the Excel document(s).
Converting them to csv and then back to xls solves the problem.
I have a table similar to the following:
Part #        Price    Status
1st Part #    $1.00    OK
2nd Part #    $2.00    Discontinued
Nth Part #    $N.00    Reordered
My Java code will be looking for the status of "Nth Part #", where I have no idea how big the table is, how many columns it has, and no idea what N is (until run time). In Ruby/WATIR, I would have used the table's id to grab its HTML, and then used Ruby to iterate over the rows until the part # matched, and then checked that row's corresponding status in the Status column (whichever column that might be; it's set in the header row).
Selenium's standard table lookup function selenium.getTable("table.1.2") only works for static tables that contain the same contents for each test. The overkill selenium.get_html_source is a waste since Selenium already knows how to find the table, plus then I have to parse the entire web page.
Any ideas on how I can grab the html of the table, and what would be the best way to iterate over the rows and/or columns?
Thanks in advance.
The easiest thing to do would be to use getTable like this
selenium.getTable("table." + (1 + n) + ".3")
to get the "Status" cell for the nth row if you know what n will be at runtime.
If you are trying to iterate over all of the rows in the table, you could do something like this
try {
    for (int n = 1; true; n++) {
        String cellContents = selenium.getTable("table." + n + ".3");
        //do something with cellContents
    }
}
catch (SeleniumException e) {
    //handle end of table (getTable throws once the row no longer exists)
}
or, alternatively
final int rowCount = selenium.getXpathCount("id('table')/tbody/tr").intValue();
for (int n = 1; n < rowCount; n++) {
    String cellContents = selenium.getTable("table." + n + ".3");
}
Remember that in getTable(locator.row.column), row and column start at 1.
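Putting the pieces together, a rough sketch of the row scan described in the question could look like this; the table locator "table", the assumption that the part number sits in column 1 and the status in column 3, and the helper name are all illustrative:

import com.thoughtworks.selenium.Selenium;

public class PartStatusLookup {
    // Scan the table row by row until the part number matches, then return that
    // row's Status cell. Row/column addressing follows the convention used above.
    public static String findStatus(Selenium selenium, String partNumber) {
        int rowCount = selenium.getXpathCount("id('table')/tbody/tr").intValue();
        for (int n = 1; n < rowCount; n++) {
            if (partNumber.equals(selenium.getTable("table." + n + ".1"))) {
                return selenium.getTable("table." + n + ".3");
            }
        }
        return null; // part number not present in the table
    }
}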
Not exactly what you're asking for, but I solved a similar problem by assigning the unique id (the part number, it sounds like, in your case) as the html id of the tr. Then I used Selenium xpath locators to get the row and columns I needed for my test.