Changing cells in a sheet using Java and the Smartsheet API

I have a small but, for me, unsolvable problem. My Java code connects to Smartsheet and reads the list of sheets in a folder (by folder ID); it then copies only the sheets from that folder (leaving the reports and dashboards), renames them, and pastes them into another folder. I'm using Corretto 17.
How can I change the value of a cell in a sheet from code? I only need to change one cell per run, so it shouldn't take an ingenious loop; I want to understand the basics of this.
Below are the two classes covering the part after the copying and renaming. Everything works perfectly until I try to change a cell. In case it helps: the cell is unlocked, its valueType is TEXT_NUMBER, and it contains plain text, not a formula (the sheet does have other cells whose values come from formulas; those are locked).
I can read the value of the cell, but I can't set it.
public void projectFolderCreate(String projectID_Value) throws SmartsheetException {
    Smartsheet smartsheet = SmartsheetFactory.createDefaultClient(API);
    System.out.println("Folder working. Loading folder: " + getFolderID());
    Folder workFolder = smartsheet.folderResources().getFolder(getFolderID(), null);
    for (var element : workFolder.getSheets()) {
        System.out.println(element.getId());
        ContainerDestination containerDestination = new ContainerDestination()
                .setDestinationType(DestinationType.FOLDER)
                .setDestinationId(8775808614459268L)
                .setNewName(element.getName().replace("Project_Name", projectID_Value));
        Sheet sheet = smartsheet.sheetResources().copySheet(element.getId(), containerDestination,
                EnumSet.of(SheetCopyInclusion.DATA, SheetCopyInclusion.CELLLINKS));
        if (sheet.getName().endsWith("Metadata")) {
            Sheet newSheet = smartsheet.sheetResources().getSheet(sheet.getId(), null, null, null, null, null, null, null);
            long newSheetId = newSheet.getId();
            System.out.println("ID of the New Sheet : " + newSheetId);
            Column projIDcolumn = newSheet.getColumns().get(1);
            System.out.println(projIDcolumn.getTitle());
            Row firstRow = newSheet.getRows().get(2);
            firstRow.getCells().get(2).setValue(new CellDataItem().setObjectValue(projectID_Value));
            firstRow.getColumnByIndex(projIDcolumn.getIndex());
            Cell s = newSheet.getRows().get(2).getCells().get(1).setDisplayValue(projectID_Value);
        }
        System.out.println(sheet.getName() + " was created");
    }
}
ProjectFolderWorking(String projectID) throws SQLException, SmartsheetException {
    String folderID;
    LikeAWall law = new LikeAWall(1);
    DataBase db = new DataBase(law.getPickDB());
    db.setApi();
    SmartIM smartIM = new SmartIM(db.getApi());
    folderID = String.valueOf(SmartIM.getFolderID());
    System.out.println("FolderID: " + folderID);
    setProjectID(projectID);
    smartIM.projectFolderCreate(getProjectID());
}
I'm sure the variable references are correct, because the getters work; the setters, however, compile and run without any exceptions / errors / bugs / problems, and of course without any result.
Below are the methods I tried.
smartsheet.sheetResources().getSheet(sheet.getId(), null, null, null, null, null, null, null).getRows().get(2).getCells().set(1, null);
smartsheet.sheetResources().getSheet(sheet.getId(), null, null, null, null, null, null, null).getRows().get(2).getCells().get(1).setDisplayValue(projectID_Value);
smartsheet.sheetResources().getSheet(sheet.getId(), null, null, null, null, null, null, null).getRows().get(2).getCells().get(1).setValue(projectID_Value.toString()); // and with getBytes()
smartsheet.sheetResources().getSheet(sheet.getId(), null, null, null, null, null, null, null).getRows().get(2).getCells().get(1).setObjectValue(...);
None of this made much sense, but by then I was just desperate. I also tried everything above split into separate steps, along the lines of:
Sheet sh = lallalalla;
Column s = sh.lalallalala;
Rows w = sh.lalallala;
Cell c = w.getCells().get(lalallaa);
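For what it's worth: in the Smartsheet Java SDK, setters on a Cell fetched via getSheet() only mutate the local model object; nothing is sent to the server until the change is pushed with updateRows. A minimal sketch of that pattern (hedged: the row/column indexes mirror the question and the lookups are illustrative; updateRows itself is the documented SDK call):

// Build a fresh Cell/Row pair that references the target by ID, then push it
// with updateRows -- that is the call that actually hits the Smartsheet API.
Sheet newSheet = smartsheet.sheetResources()
        .getSheet(sheet.getId(), null, null, null, null, null, null, null);

Cell cell = new Cell();
cell.setColumnId(newSheet.getColumns().get(1).getId()); // target column, as in the question
cell.setValue(projectID_Value);

Row row = new Row();
row.setId(newSheet.getRows().get(2).getId());           // target row, as in the question
row.setCells(java.util.Collections.singletonList(cell));

smartsheet.sheetResources().rowResources()
        .updateRows(newSheet.getId(), java.util.Collections.singletonList(row));

Note that displayValue is read-only from the API's point of view, which would explain why setDisplayValue compiles but changes nothing on the server; setting value (or objectValue) on a new Cell and pushing it with updateRows is the path that persists.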

Related

Column headers not visible in GridExporter add-on for Vaadin 23

Hi, I installed GridExporter for Vaadin 23. The add-on works, except that whatever format I export to, the column names are not inserted in the first row; there is only a blank cell per column. I use a persistence Tuple class in the grid to handle a generic query on a table.
Below is an example:
private Grid<Tuple> grd_report;

private void buildReport(ReportEntity re) {
    rg = new ReportGen(re);
    List<Tuple> rows = rg.generate();
    List<TupleElement<?>> elements = rows.get(0).getElements();
    grd_report.removeAllColumns();
    // HeaderRow headerRow;
    if (grd_report.getHeaderRows().size() > 0)
        grd_report.getHeaderRows().clear();
    else
        grd_report.appendHeaderRow();
    HeaderRow headerRow = grd_report.getHeaderRows().get(0);
    for (int idxCol = 0; idxCol < elements.size(); idxCol++) {
        Integer xx = idxCol;
        String ColumName = elements.get(idxCol).getAlias();
        Grid.Column<Tuple> Column = grd_report.addColumn(te -> te.get(xx))
                .setHeader(ColumName).setSortable(true).setKey(ColumName);
        if (idxCol == 0)
            Column.setFooter("Total:" + rows.size());
        headerRow.getCell(Column).setComponent(createFilterHeader(ColumName));
        Column.setResizable(true);
    }
    grd_report.setItems(rows);
    grd_report.setPageSize(30);
}

private void exportFile(ComboItem ci) {
    Anchor download = null;
    exporter = GridExporter.createFor(grd_report);
    exporter.setTitle(cmb_reports.getValue().getName());
    exporter.setFileName("Export_" + new SimpleDateFormat("yyyyddMM").format(Calendar.getInstance().getTime()));
    exporter.setAutoAttachExportButtons(false);
Can you help me?
Massimiliano
Did you manage to resolve the problem?
The add-on does not support component headers:
Grid<String> grid = new Grid<>();
Column<String> col1 = grid.addColumn(x -> x).setHeader("Has text header");
Column<String> col2 = grid.addColumn(x -> x).setHeader(new Span("Text header"));
System.out.println(GridHelper.getHeader(grid, col1));
System.out.println(GridHelper.getHeader(grid, col2)); //prints an empty string
Since Vaadin does not provide an API for retrieving the header (see edit), we used GridHelper, which implements a workaround that is only able to retrieve string headers:
use a lot of reflection to dig out Renderer from the column, and then Template from the Renderer.
That is implemented here. GridExporter just delegates into it (source)
protected List<Pair<String, Column<T>>> getGridHeaders(Grid<T> grid) {
    return exporter.columns.stream()
            .map(column -> ImmutablePair.of(GridHelper.getHeader(grid, column), column))
            .collect(Collectors.toList());
}
GridExporter does not (currently) allow setting an "export header" independently from the Grid header.
I created an enhancement request for that.
==EDIT==
Vaadin 23.2 does provide column.getHeaderComponent(), and it's also possible to call getHeaderComponent().getElement().getTextRecursively(), but in most cases that will not be enough, so the need for a custom export header still stands.
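In the meantime, a possible workaround (just a sketch; it assumes GridHelper keeps resolving headers that were set as plain Strings): keep the String header on the column and put the filter component into an appended header row, instead of replacing the default header cell with a component:

Grid<String> grid = new Grid<>();
// Plain-String header: GridHelper (and therefore GridExporter) can read it.
Grid.Column<String> col = grid.addColumn(x -> x).setHeader("Amount").setKey("amount");
// Filter widgets go into an appended header row, leaving the text header intact;
// TextField stands in here for whatever createFilterHeader builds.
HeaderRow filterRow = grid.appendHeaderRow();
filterRow.getCell(col).setComponent(new TextField());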

Changing the position of the average according to the last row in Excel

I want to write a SQL query result to a generated Excel file, and at the bottom I need to average some columns. My question is: the number of rows changes from query to query. How can I shift the average row so that it follows the last data row when the row count changes?
(example screenshots omitted: one of the data, one after writing to Excel)
I'm currently learning to write PostgreSQL results into an Excel file created with Apache POI.
Resource resource = resourceLoader.getResource("classpath:/temps/" + querySelected.getTemplateName());
workbook = new XSSFWorkbook(resource.getInputStream());
XSSFSheet sheetTable1 = workbook.getSheet("Table1");

int rowCounter = 1;
for (var knz : tname) { // assuming tname is the collection of query results
    Row values = sheetTable1.createRow(rowCounter);
    rowCounter++;
    Cell cell = values.createCell(0, CellType.NUMERIC);
    cell.setCellValue(knz.gettablename().doubleValue());
}
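For reference, one way to keep the average attached to the end of the data with POI (a minimal, self-contained sketch; the assumption that the numbers live in column B starting at Excel row 2 is mine, not from the question): find the last data row with getLastRowNum() and write the AVERAGE formula one row below it.

import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class AverageAfterLastRow {
    public static void main(String[] args) throws Exception {
        try (Workbook wb = new XSSFWorkbook()) {
            Sheet sheet = wb.createSheet("Table1");
            // Sample data in column B (index 1), occupying Excel rows 2..6.
            for (int i = 1; i <= 5; i++) {
                sheet.createRow(i).createCell(1).setCellValue(i * 10.0);
            }
            int lastRow = sheet.getLastRowNum();       // 0-based index of the last data row
            Row avgRow = sheet.createRow(lastRow + 1); // the row right below the data
            Cell avgCell = avgRow.createCell(1);
            // Excel references are 1-based, hence the +1 when building the range.
            avgCell.setCellFormula("AVERAGE(B2:B" + (lastRow + 1) + ")");
            try (FileOutputStream out = new FileOutputStream("out.xlsx")) {
                wb.write(out);
            }
        }
    }
}

Because the formula is built from getLastRowNum() at write time, the average row follows the data no matter how many rows the query produced.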

Loading Excel files in Java takes too much time

I would like to load an Excel file into a Java program, parse it, and insert the necessary data into a database every day, but I don't want to load the whole file each time the program runs; I only need the last 90 rows. Is it possible to load an Excel (XLSM) file partially in Java (Java isn't mandatory, just preferred; another programming language would work too) to decrease the loading time?
The whole run takes around 60-70 seconds, of which loading the Excel file takes 35 seconds. The file has 4,000 rows, and each row has 900 columns.
try {
    workbook = WorkbookFactory.create(new FileInputStream(file));
    sheet = workbook.getSheetAt(0);
    rowSize = sheet.getLastRowNum();
    myWriter = new FileWriter("/Users/mykyusuf/Desktop/filename.txt");
    Row malzeme = sheet.getRow(1);
    Row kaynak = sheet.getRow(2);
    Row endeks = sheet.getRow(3);
    myWriter.write("insert all\n");
    Row row = sheet.getRow(rowSize - 1);
    for (int i = 4; i < rowSize - 1; i++) {
        row = sheet.getRow(i);
        for (Cell cell : row) {
            if (cell.getColumnIndex() > 3) {
                myWriter.write("into piyasa_takip (tarih,malzeme,kaynak,endeks,deger) values (to_date(\'" + row.getCell(3).getLocalDateTimeCellValue().toLocalDate() + "\','YYYY-MM-DD'),\'" + malzeme.getCell(cell.getColumnIndex()) + "\',\'" + kaynak.getCell(cell.getColumnIndex()) + "\',\'" + endeks.getCell(cell.getColumnIndex()) + "\',\'" + cell + "\')\n");
            }
        }
    }
    row = sheet.getRow(rowSize - 1);
    for (Cell cell : row) {
        if (cell.getColumnIndex() > 3) {
            myWriter.write("into piyasa_takip (tarih,malzeme,kaynak,endeks,deger) values (to_date(\'" + row.getCell(3).getLocalDateTimeCellValue().toLocalDate() + "\','YYYY-MM-DD'),\'" + malzeme.getCell(cell.getColumnIndex()) + "\',\'" + kaynak.getCell(cell.getColumnIndex()) + "\',\'" + endeks.getCell(cell.getColumnIndex()) + "\',\'" + cell + "\')\n");
        }
    }
    myWriter.write(" Select * from DUAL\n");
    myWriter.close();
}
I don't know a simple answer to your question, but I want to help you figure it out.
There are two substantially different formats: *.XLS (old) and *.XLSX (new). In the common case, the new format is more compact (because it uses zipping as part of its container).
I don't know a simple way to "cut" the last 90 rows from an Excel file, especially since Excel has a complicated format with tabs, formulas and hyperlinks (and scripts :-) ) in a document.
But we can use the "divide and rule" principle. If you have the big Excel file locally and that file loads very slowly on the remote host, you can process the file locally (to extract only the new records into another file) and load only these "modifications" to the remote host.
Thus, you divide the task into two parts: super-simple processing of the large file locally (to isolate the changed part), and normal, smart processing on the remote host.
Maybe this will help you?
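If even the local pass is the bottleneck, POI's event-based (SAX) reader can stream a sheet without materializing the whole workbook. A minimal sketch, assuming POI 5.x (the file name and the 90-row ring buffer are illustrative; a streaming pass cannot know the total row count up front, so it keeps the tail as it goes):

import java.io.File;
import java.io.InputStream;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import org.apache.poi.openxml4j.opc.OPCPackage;
import org.apache.poi.util.XMLHelper;
import org.apache.poi.xssf.eventusermodel.XSSFReader;
import org.apache.poi.xssf.eventusermodel.XSSFSheetXMLHandler;
import org.apache.poi.xssf.usermodel.XSSFComment;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;

public class TailRowsReader {
    public static void main(String[] args) throws Exception {
        ArrayDeque<List<String>> last90 = new ArrayDeque<>(90); // ring buffer for the tail
        try (OPCPackage pkg = OPCPackage.open(new File("big.xlsm"))) {
            XSSFReader reader = new XSSFReader(pkg);
            XSSFSheetXMLHandler.SheetContentsHandler handler =
                    new XSSFSheetXMLHandler.SheetContentsHandler() {
                private List<String> current;
                @Override public void startRow(int rowNum) { current = new ArrayList<>(); }
                @Override public void cell(String ref, String value, XSSFComment comment) {
                    current.add(value);
                }
                @Override public void endRow(int rowNum) {
                    if (last90.size() == 90) last90.pollFirst(); // drop the oldest row
                    last90.addLast(current);
                }
            };
            XMLReader parser = XMLHelper.newXMLReader();
            parser.setContentHandler(new XSSFSheetXMLHandler(
                    reader.getStylesTable(), reader.getSharedStringsTable(), handler, false));
            try (InputStream sheet = reader.getSheetsData().next()) { // first sheet only
                parser.parse(new InputSource(sheet));
            }
        }
        System.out.println("kept " + last90.size() + " tail rows");
    }
}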
Maybe you can try Free Spire.XLS to solve this.
I chose some data (70 rows and 8 columns); it took 1-2 seconds to read it.
Hope it helps you save some time.
The code is below:
import com.spire.xls.Workbook;
import com.spire.xls.Worksheet;

public class GetCellRange {
    public static void main(String[] args) {
        // Load the sample document
        Workbook workbook = new Workbook();
        workbook.loadFromFile("sample.xlsx");
        // Get the first worksheet
        Worksheet worksheet = workbook.getWorksheets().get(0);
        // Print the chosen range, one row per line
        for (int row = 1; row <= 70; row++) {
            for (int col = 1; col <= 8; col++) {
                System.out.print(worksheet.getCellRange(row, col).getValue() + "\t");
            }
            System.out.println();
        }
    }
}

How do I save and retrieve specific data from a CSV file without headers in Java?

I am writing an application which needs to load a large CSV file that is pure data and doesn't contain any headers.
I am using the FastCSV library to parse the file; the data then needs to be stored, and specific fields retrieved. Since not all of the data is necessary, I am skipping every third line.
Is there a way to set the headers after the file has been parsed and save it all in a data structure such as an ArrayList?
Here is the function which loads the file:
public void fastCsv(String filePath) {
    File file = new File(filePath);
    CsvReader csvReader = new CsvReader();
    int linecounter = 1;
    try (CsvParser csvParser = csvReader.parse(file, StandardCharsets.UTF_8)) {
        CsvRow row;
        while ((row = csvParser.nextRow()) != null) {
            if ((linecounter % 3) > 0) {
                // System.out.println("Read line: " + row);
                // System.out.println("First column of line: " + row.getField(0));
                System.out.println(row);
            }
            linecounter++;
        }
        System.out.println("Execution Time in ms: " + elapsedTime); // elapsedTime measured elsewhere
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Any insight would be greatly appreciated.
univocity-parsers supports field selection and can do this very easily. It's also faster than the library you are using.
Here's how you can use it to select columns of interest:
Input
String input = "X, X2, Symbol, Date, Open, High, Low, Close, Volume\n" +
" 5, 9, AAPL, 01-Jan-2015, 110.38, 110.38, 110.38, 110.38, 0\n" +
" 2710, 289, AAPL, 01-Jan-2015, 110.38, 110.38, 110.38, 110.38, 0\n" +
" 5415, 6500, AAPL, 02-Jan-2015, 111.39, 111.44, 107.35, 109.33, 53204600";
Configure
CsvParserSettings settings = new CsvParserSettings(); //many options here, check the tutorial
settings.setHeaderExtractionEnabled(true); //tells the parser to use the first row as the header row
settings.selectFields("X", "X2"); //selects the fields
Parse and print results
CsvParser parser = new CsvParser(settings);
for (String[] row : parser.iterate(new StringReader(input))) {
    System.out.println(Arrays.toString(row));
}
Output
[5, 9]
[2710, 289]
[5415, 6500]
On the field selection, you can use any sequence of fields and have rows with different column sizes, and the parser will handle this just fine. No need to write complex logic to handle that; the short sketch below illustrates it.
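For instance, a quick check along these lines (hedged: the second data row is deliberately short; the missing selected field is expected to come back as null rather than break the parse):

CsvParserSettings settings = new CsvParserSettings();
settings.setHeaderExtractionEnabled(true);
settings.selectFields("X", "Symbol");

// The second data row deliberately stops after two fields.
String shortInput = "X, X2, Symbol\n 1, 2, AAPL\n 3, 4\n";

CsvParser parser = new CsvParser(settings);
for (String[] row : parser.iterate(new StringReader(shortInput))) {
    System.out.println(Arrays.toString(row)); // e.g. [1, AAPL], then [3, null]
}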
To process the File in your code, change the earlier example to do this:
for(String[] row : parser.iterate(new File(filePath))){
... //your logic goes here.
}
If you want a more usable record (with typed values), use this instead:
for(Record record : parser.iterateRecords(new File(filePath))){
... //your logic goes here.
}
Speeding up
The fastest way of processing the file is through a RowProcessor. That's a callback that receives the rows parsed from the input:
settings.setProcessor(new AbstractRowProcessor() {
    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        System.out.println(Arrays.toString(row));
        context.skipLines(3); // use the context object to control the parser
    }
});

CsvParser parser = new CsvParser(settings);
// `parse` doesn't return anything. Rows go to the `rowProcessed` method.
parser.parse(new StringReader(input));
You should be able to parse very large files pretty quickly. If things are slowing down, look in your code (avoid adding values to lists or collections in memory, or at least pre-allocate the collections to a good size, and give the JVM a large amount of memory to work with, using the -Xms and -Xmx flags).
Right now this parser is the fastest you can find. I made this performance comparison a while ago, which you can use for reference.
Hope this helps
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license)
Do you know which fields/columns you want to keep, and what you'd like the "header" value to be? I.e., you want the first and third columns, and you want them called "first" and "third"? If so, you could build a HashMap of string/object pairs (or other appropriate types, depending on your actual data and needs), and add each HashMap to an ArrayList. This should get you going; just be sure to change the HashMap types as needed.
ArrayList<HashMap<String, String>> arr = new ArrayList<>();
while ((row = csvParser.nextRow()) != null) {
    if ((linecounter % 3) > 0) {
        // System.out.println("Read line: " + row);
        // System.out.println("First column of line: " + row.getField(0));
        // keep col1 and col3 -- note: a fresh map per row, otherwise every
        // list entry would point at the same (repeatedly cleared) map
        HashMap<String, String> hm = new HashMap<>();
        hm.put("first", row.getField(0));
        hm.put("third", row.getField(2));
        arr.add(hm);
    }
    linecounter++;
}
If you want to capture all columns, you can use a similar technique, but I'd build a mapping data structure so that you can match field indexes to column header names in a loop, adding each column to the HashMap that is then stored in the ArrayList; see the sketch below.
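A sketch of that mapping approach with FastCSV (the header names are invented, since the file has none, and getFieldCount() is assumed to be the per-row field count in the FastCSV version used above):

// Invented header names for a header-less file; order matches the columns.
String[] headers = {"first", "second", "third"};
ArrayList<HashMap<String, String>> arr = new ArrayList<>();
CsvRow row;
while ((row = csvParser.nextRow()) != null) {
    HashMap<String, String> record = new HashMap<>();
    for (int i = 0; i < headers.length && i < row.getFieldCount(); i++) {
        record.put(headers[i], row.getField(i)); // index -> name mapping
    }
    arr.add(record);
}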

Inserting data with UCanAccess from big text files is very slow

I'm trying to read .txt text files with more than 10,000 lines per file, split them, and insert the data into an Access database using Java and UCanAccess. The problem is that it becomes slower and slower every time (as the database gets bigger).
Now, after reading 7 text files and inserting them into the database, it takes the project more than 20 minutes to read another file.
I tried doing just the reading, and it works fine, so the problem is the actual inserting into the database.
N.B.: This is my first time using UCanAccess with Java, because I found that the JDBC-ODBC Bridge is no longer available. Any suggestions for an alternative solution would also be appreciated.
If your current task is simply to import a large amount of data from text files straight into the database, and it does not require any sophisticated SQL manipulations, then you might consider using the Jackcess API directly. For example, to import a CSV file you could do something like this:
String csvFileSpec = "C:/Users/Gord/Desktop/BookData.csv";
String dbFileSpec = "C:/Users/Public/JackcessTest.accdb";
String tableName = "Book";

try (Database db = new DatabaseBuilder()
        .setFile(new File(dbFileSpec))
        .setAutoSync(false)
        .open()) {
    new ImportUtil.Builder(db, tableName)
            .setDelimiter(",")
            .setUseExistingTable(true)
            .setHeader(false)
            .importFile(new File(csvFileSpec));
    // this is a try-with-resources block,
    // so db.close() happens automatically
}
Or, if you need to manually parse each line of input, insert a row, and retrieve the AutoNumber value for the new row, then the code would be more like this:
String dbFileSpec = "C:/Users/Public/JackcessTest.accdb";
String tableName = "Book";

try (Database db = new DatabaseBuilder()
        .setFile(new File(dbFileSpec))
        .setAutoSync(false)
        .open()) {
    // sample data (e.g., from parsing of an input line)
    String title = "So, Anyway";
    String author = "Cleese, John";

    Table tbl = db.getTable(tableName);
    Object[] rowData = tbl.addRow(Column.AUTO_NUMBER, title, author);
    int newId = (int) rowData[0]; // retrieve generated AutoNumber
    System.out.printf("row inserted with ID = %d%n", newId);
    // this is a try-with-resources block,
    // so db.close() happens automatically
}
To update an existing row based on its primary key, the code would be:
Table tbl = db.getTable(tableName);
Row row = CursorBuilder.findRowByPrimaryKey(tbl, 3); // i.e., ID = 3
if (row != null) {
    // Note: column names are case-sensitive
    row.put("Title", "The New Title For This Book");
    tbl.updateRow(row);
}
Note that for maximum speed I used .setAutoSync(false) when opening the Database, but bear in mind that disabling AutoSync does increase the chance of leaving the Access database file in a damaged (and possibly unusable) state if the application terminates abnormally while performing the updates.
Also, if you need to use SQL/UCanAccess, you have to call setAutoCommit(false) on the connection at the beginning, and commit every 200-300 records. Performance will improve dramatically (by about 99%).
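A minimal sketch of that batching pattern over plain JDBC/UCanAccess (the table and column names reuse the Book example above; parsedLines stands in for the split lines of the .txt input):

try (Connection conn = DriverManager.getConnection(
        "jdbc:ucanaccess://C:/Users/Public/JackcessTest.accdb")) {
    conn.setAutoCommit(false); // one transaction per batch instead of per insert
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO Book (Title, Author) VALUES (?, ?)")) {
        int count = 0;
        for (String[] fields : parsedLines) {
            ps.setString(1, fields[0]);
            ps.setString(2, fields[1]);
            ps.addBatch();
            if (++count % 250 == 0) { // commit every 200-300 records
                ps.executeBatch();
                conn.commit();
            }
        }
        ps.executeBatch(); // flush the remainder
        conn.commit();
    }
}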
