I have an Excel template with two sheets that I want to populate through XLSTransformer. The data are different for the two sheets (lists of different lengths, one of them built from the rows of a table - see code below), which means I cannot pass them through one map.
I've tried with two maps:
//for the first Excel sheet
Map<String, List<Results>> beanParams = new HashMap<String, List<Results>>();
beanParams.put("rs", rs);
//for the second one
Map<String, List<ResultsDetails>> beanParams2 = new HashMap<String, List<ResultsDetails>>();
beanParams2.put("detRes", detRes);
XLSTransformer former = new XLSTransformer();
former.transformXLS(srcFilePath, beanParams, destFilePath);
former.transformXLS(srcFilePath, beanParams2, destFilePath);
The lists look like this:
List<Results> rs = new ArrayList<Results>();
Results s1 = new Results(compteurFemme, compteurHomme, compteurTot, averageTempsFemme, averageTempsHomme, averageTempsTot);
rs.add(s1);
List<ResultsDetails> detRes = new ArrayList<ResultsDetails>();
for(int i = 0; i < tableau.getRowCount(); i++){
    // read the row's values first, then build the bean from them
    item[i] = ((DataIdGenre) tableau.getModel()).getValueAt(i, 2).toString();
    rep[i] = ((DataIdGenre) tableau.getModel()).getValueAt(i, 3).toString();
    justefaux[i] = ((DataIdGenre) tableau.getModel()).getValueAt(i, 4).toString();
    tempsrep[i] = ((DataIdGenre) tableau.getModel()).getValueAt(i, 5).toString();
    ResultsDetails newRes = new ResultsDetails(item[i], rep[i], justefaux[i], tempsrep[i]);
    detRes.add(newRes);
}
Individually, each export works on its respective sheet, but run together the second call erases the first one: each transformXLS call re-reads the pristine template from srcFilePath, so the second pass starts from scratch and overwrites destFilePath.
I then tried to use some kind of multimap, with one key (the one I put in the Excel template) for two values:
Map<String, List<Object>> hm = new HashMap<String, List<Object>>();
List<Object> values = new ArrayList<Object>();
values.add(rs);
values.add(detRes);
hm.put("det", values);
XLSTransformer former = new XLSTransformer();
former.transformXLS(srcFilePath, hm, destFilePath);
But I got an error telling me that the data was inaccessible.
So my question is: is there a way to deal directly with different sheets when using XLSTransformer?
OK, I've come up with something, through a temporary file:
private void exportDataDet(File file) throws ParseException, IOException, ParsePropertyException, InvalidFormatException {
    List<ResultsDetails> detRes = generateResultsDetails();
    try (InputStream is = IdGenre.class.getResourceAsStream("/xlsTemplates/IdGenre/IdGenre_20-29_+12.xlsx");
         OutputStream os = new FileOutputStream("d:/IdGenreYOLO.xlsx")) {
        Context context = new Context();
        context.putVar("detRes", detRes);
        JxlsHelper.getInstance().processTemplate(is, os, context);
    }
}
private void exportData(File file) throws ParseException, IOException, ParsePropertyException, InvalidFormatException {
    List<Results> rs = generateResults();
    try {
        String srcFilePath = "d:/IdGenreYOLO.xlsx";
        String destFilePath = "d:/IdGenreRes.xlsx";
        Map<String, List<Results>> beanParams = new HashMap<String, List<Results>>();
        beanParams.put("rs", rs);
        XLSTransformer former = new XLSTransformer();
        former.transformXLS(srcFilePath, beanParams, destFilePath);
    } finally {
        // always clean up the temporary file, even if the transform fails
        Files.delete(Paths.get("d:/IdGenreYOLO.xlsx"));
    }
}
It's probably not the best solution, especially since I have to add other data that won't fit into the two existing lists, and - at least for now - that will go through a second temp file.
I haven't used the same method twice because XLSTransformer was the only one I could get working with the Excel template I was given (which I can't modify).
I'm open to any suggestion.
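One thing worth trying before the temp-file dance: as far as I can tell, transformXLS takes a single bean map and resolves each key wherever it appears in the workbook, so the two lists can share one map under their different keys and a single call can feed both sheets. A minimal sketch, assuming the template references rs on the first sheet and detRes on the second:
Map<String, Object> beanParams = new HashMap<String, Object>();
beanParams.put("rs", rs); // picked up by the first sheet
beanParams.put("detRes", detRes); // picked up by the second sheet
XLSTransformer former = new XLSTransformer();
former.transformXLS(srcFilePath, beanParams, destFilePath); // one pass over the whole workbook
And if you ever move fully to Jxls 2, the same idea applies there: put both variables into one Context and make a single JxlsHelper pass, which removes the need for the temporary file. A sketch reusing the names from the question:
List<Results> rs = generateResults();
List<ResultsDetails> detRes = generateResultsDetails();
try (InputStream is = IdGenre.class.getResourceAsStream("/xlsTemplates/IdGenre/IdGenre_20-29_+12.xlsx");
     OutputStream os = new FileOutputStream("d:/IdGenreRes.xlsx")) {
    Context context = new Context();
    // one Context can carry any number of variables; each sheet picks up the ones it references
    context.putVar("rs", rs);
    context.putVar("detRes", detRes);
    JxlsHelper.getInstance().processTemplate(is, os, context);
}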
I have a project where I want to load a given shapefile and pick out polygons above a certain size before writing the results to a new shapefile. Maybe not the most efficient approach, but I've got code that successfully does all of that, right up to the point where it is supposed to write the shapefile. I get no errors, but the resulting shapefile has no usable data in it. I've followed as many tutorials as possible, but I'm still coming up blank.
The first bit of code is where I read in a shapefile, pick out the polygons I want, and put them into a feature collection. This part seems to work fine as far as I can tell.
public class ShapefileTest {
public static void main(String[] args) throws MalformedURLException, IOException, FactoryException, MismatchedDimensionException, TransformException, SchemaException {
File oldShp = new File("Old.shp");
File newShp = new File("New.shp");
//Get data from the original ShapeFile
Map<String, Object> map = new HashMap<String, Object>();
map.put("url", oldShp.toURI().toURL());
//Connect to the dataStore
DataStore dataStore = DataStoreFinder.getDataStore(map);
//Get the typeName from the dataStore
String typeName = dataStore.getTypeNames()[0];
//Get the FeatureSource from the dataStore
FeatureSource<SimpleFeatureType, SimpleFeature> source = dataStore.getFeatureSource(typeName);
SimpleFeatureCollection collection = (SimpleFeatureCollection) source.getFeatures(); //Get all of the features - no filter
//Start creating the new Shapefile
final SimpleFeatureType TYPE = createFeatureType(); //Calls a method that builds the feature type - tested and works.
DefaultFeatureCollection newCollection = new DefaultFeatureCollection(); //To hold my new collection
try (FeatureIterator<SimpleFeature> features = collection.features()) {
while (features.hasNext()) {
SimpleFeature feature = features.next(); //Get next feature
SimpleFeatureBuilder fb = new SimpleFeatureBuilder(TYPE); //Create a new SimpleFeature based on the original
Integer level = (Integer) feature.getAttribute(1); //Get the level for this feature
MultiPolygon multiPoly = (MultiPolygon) feature.getDefaultGeometry(); //Get the geometry collection
//First count how many new polygons we will have
int numNewPoly = 0;
for (int i = 0; i < multiPoly.getNumGeometries(); i++) {
double area = getArea(multiPoly.getGeometryN(i));
if (area > 20200) {
numNewPoly++;
}
}
//Now build an array of the larger polygons
Polygon[] polys = new Polygon[numNewPoly]; //Array of new geometies
int iPoly = 0;
for (int i = 0; i < multiPoly.getNumGeometries(); i++) {
double area = getArea(multiPoly.getGeometryN(i));
if (area > 20200) { //Write the new data
polys[iPoly] = (Polygon) multiPoly.getGeometryN(i);
iPoly++;
}
}
GeometryFactory gf = new GeometryFactory(); //Create a geometry factory
MultiPolygon mp = gf.createMultiPolygon(polys); //Create the MultiPolygon from the filtered polygons
fb.add(mp); //Add the geometry collection to the feature builder
fb.add(level);
fb.add("dBA");
SimpleFeature newFeature = fb.buildFeature(null); //Build the feature from the values added above
newCollection.add(newFeature); //Add it to the collection
}
At this point I have a collection that looks right - it has the correct bounds and everything. The next bit of code is where I put it into a new shapefile.
//Time to put together the new Shapefile
Map<String, Serializable> newMap = new HashMap<String, Serializable>();
newMap.put("url", newShp.toURI().toURL());
newMap.put("create spatial index", Boolean.TRUE);
DataStore newDataStore = DataStoreFinder.getDataStore(newMap);
newDataStore.createSchema(TYPE);
String newTypeName = newDataStore.getTypeNames()[0];
SimpleFeatureStore fs = (SimpleFeatureStore) newDataStore.getFeatureSource(newTypeName);
Transaction t = new DefaultTransaction("add");
fs.setTransaction(t);
fs.addFeatures(newCollection);
t.commit();
ReferencedEnvelope env = fs.getBounds();
}
}
I put in that very last line to check the bounds of the FeatureStore fs, and it comes back null. Sure enough, when I load the newly created shapefile (which DOES get created, and is about the right size), nothing shows up.
The solution actually had nothing to do with the code I posted - it had everything to do with my FeatureType definition. I did not include the "the_geom" attribute in my polygon feature type, so no geometry was being written to the file.
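For reference, a minimal sketch of what a working createFeatureType() can look like; the type name, CRS, and the non-geometry attribute names are assumptions to adapt:
private static SimpleFeatureType createFeatureType() {
    SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
    builder.setName("Contours"); // hypothetical type name
    builder.setCRS(DefaultGeographicCRS.WGS84); // assumption: match your source data's CRS
    builder.add("the_geom", MultiPolygon.class); // the geometry attribute the shapefile writer looks for
    builder.add("level", Integer.class);
    builder.length(5).add("units", String.class); // e.g. "dBA"
    return builder.buildFeatureType();
}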
I believe you are missing the step that releases the file. There is no close() on the feature store itself; try closing the transaction and disposing of the data store after the t.commit() line:
t.close();
newDataStore.dispose();
As an expedient alternative, you might try the ShapefileDumper utility mentioned in the Shapefile DataStores docs. Using it may shrink your second code block down to two or three lines.
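Something along these lines - an untested sketch, assuming GeoTools' org.geotools.data.shapefile.ShapefileDumper and the variables from the question:
ShapefileDumper dumper = new ShapefileDumper(newShp.getParentFile()); // target directory for the output
dumper.dump(newCollection); // writes the collection out as a shapefile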
I've got a problem exporting a large .xls file with SXSSF - by large I mean 27 columns x 100,000 rows. The Excel file is returned from an endpoint request. I've already capped the number of rows - it could be 3x larger.
I'm using a template engine for inserting data.
Original code
public StreamingOutput createStreamedExcelReport(Map<String, Object> params, String templateName, String[] columnsToHide) throws Exception {
try(InputStream is = ReportGenerator.class.getResourceAsStream(templateName)) {
assert is != null;
final Transformer transformer = PoiTransformer.createTransformer(is);
AreaBuilder areaBuilder = new XlsCommentAreaBuilder(transformer);
List<Area> xlsAreaList = areaBuilder.build();
Area xlsArea = xlsAreaList.get(0);
Context context = new PoiContext();
for(Map.Entry<String, Object> entry : params.entrySet()) {
context.putVar(entry.getKey(), entry.getValue());
}
xlsArea.applyAt(new CellRef("Sheet1!A1"), context);
xlsArea.processFormulas();
return new StreamingOutput() {
@Override
public void write(OutputStream out) throws IOException {
((PoiTransformer) transformer).getWorkbook().write(out);
}
};
}
}
SXSSF
public StreamingOutput createStreamedExcelReport(Map<String, Object> params, String templateName, String[] columnsToHide) throws Exception {
try(InputStream is = ReportGenerator.class.getResourceAsStream(templateName)) {
assert is != null;
Workbook workbook = WorkbookFactory.create(is);
final PoiTransformer transformer = PoiTransformer.createSxssfTransformer(workbook);
AreaBuilder areaBuilder = new XlsCommentAreaBuilder(transformer);
List<Area> xlsAreaList = areaBuilder.build();
Area xlsArea = xlsAreaList.get(0);
Context context = new PoiContext();
for(Map.Entry<String, Object> entry : params.entrySet()) {
context.putVar(entry.getKey(), entry.getValue());
}
xlsArea.applyAt(new CellRef("Sheet1!A1"), context);
xlsArea.processFormulas();
return new StreamingOutput() {
@Override
public void write(OutputStream out) throws IOException {
transformer.getWorkbook().write(out);
}
};
}
}
The export was still running after 7 minutes when I stopped the server - too long. An acceptable time would be something like 1 minute (2 minutes max). For most of that time CPU usage was about 60-80% and memory usage was constant. I also tried exporting 40 rows - that took about 10 seconds.
Maybe my function needs to be optimized.
An additional problem is that I'm inserting formulas. In the original code the formulas are replaced with values; in the SXSSF version they are not.
I recommend disabling formula processing here, because formula support in the SXSSF version is limited and the memory consumption can be too high. Formula support may be improved in future JXLS releases.
So just remove the xlsArea.processFormulas() call and add
context.getConfig().setIsFormulaProcessingRequired(false);
to disable the tracking of cell references (as shown in the Jxls docs), and see if it works.
Also please note that both the template and the final report are expected to be in .xlsx format if you use SXSSF.
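Putting that together, the SXSSF branch of your method might look like this sketch; the three-argument createSxssfTransformer overload (row-access window plus temp-file compression) is an assumption to check against your Jxls version:
Workbook workbook = WorkbookFactory.create(is);
// keep roughly 100 rows in memory; earlier rows are flushed to disk
final PoiTransformer transformer = PoiTransformer.createSxssfTransformer(workbook, 100, true);
AreaBuilder areaBuilder = new XlsCommentAreaBuilder(transformer);
Area xlsArea = areaBuilder.build().get(0);
Context context = new PoiContext();
context.getConfig().setIsFormulaProcessingRequired(false); // no cell-reference tracking
for (Map.Entry<String, Object> entry : params.entrySet()) {
    context.putVar(entry.getKey(), entry.getValue());
}
xlsArea.applyAt(new CellRef("Sheet1!A1"), context);
// note: no processFormulas() call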
I'm using the FasterXML library to parse my CSV file. The CSV file has the column names in its first line. Unfortunately I need the columns to be renamed. I have a lambda function for this, to which I can pass the name read from the CSV file and get the new name back.
My code looks like this, but it does not work:
CsvSchema csvSchema = CsvSchema.emptySchema().withHeader();
ArrayList<HashMap<String, String>> result = new ArrayList<HashMap<String, String>>();
MappingIterator<HashMap<String, String>> it = new CsvMapper().reader(HashMap.class)
        .with(csvSchema)
        .readValues(new File(fileName));
while (it.hasNext())
    result.add(it.next());
System.out.println("changing the schema columns.");
for (int i=0; i < csvSchema.size();i++) {
String name = csvSchema.column(i).getName();
String newName = getNewName(name);
csvSchema.builder().renameColumn(i, newName);
}
csvSchema.rebuild();
When I try to print out the columns later, they are still the same as in the top line of my CSV file.
Additionally, I noticed that csvSchema.size() equals 0 - why?
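Two things are going on here. CsvSchema is immutable: builder() and rebuild() hand back builders whose results the code above discards, so csvSchema itself never changes. And a schema created via emptySchema().withHeader() reports size() == 0 because it merely instructs the parser to read the header row at parse time; the local schema object never learns the column names. One simple workaround, assuming getNewName is the renaming lambda from the question, is to rename the keys of each row after reading:
List<HashMap<String, String>> renamed = new ArrayList<HashMap<String, String>>();
for (HashMap<String, String> row : result) {
    HashMap<String, String> copy = new HashMap<String, String>();
    for (Map.Entry<String, String> e : row.entrySet()) {
        copy.put(getNewName(e.getKey()), e.getValue()); // map the old header to its new name
    }
    renamed.add(copy);
}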
You could instead use uniVocity-parsers for that. The following solution streams the input rows to the output, so you don't need to load everything into memory just to write the data back out with new headers. It will be much faster:
public static void main(String... args) throws Exception {
    Writer output = new StringWriter(); // use a FileWriter for your case
    CsvWriterSettings writerSettings = new CsvWriterSettings(); // many options here - check the documentation
    final CsvWriter writer = new CsvWriter(output, writerSettings);
    CsvParserSettings parserSettings = new CsvParserSettings(); // many options here as well
    parserSettings.setHeaderExtractionEnabled(true); // indicates the first row of the input are headers
    parserSettings.setRowProcessor(new AbstractRowProcessor() {
        @Override
        public void processStarted(ParsingContext context) {
            writer.writeHeaders("Column A", "Column B", "... etc");
        }
        @Override
        public void rowProcessed(String[] row, ParsingContext context) {
            writer.writeRow(row);
        }
        @Override
        public void processEnded(ParsingContext context) {
            writer.close();
        }
    });
    CsvParser parser = new CsvParser(parserSettings);
    Reader reader = new StringReader("A,B,C\n1,2,3\n4,5,6"); // use a FileReader for your case
    parser.parse(reader); // all rows are parsed and submitted to the RowProcessor implementation of the parserSettings
    System.out.println(output.toString());
    // nothing else to do. All resources are closed automatically in case of errors.
}
You can easily select the columns by using parserSettings.selectFields("B", "A") in case you want to reorder/eliminate columns.
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
I am new to JasperReports. I can create a simple PDF document with a JavaBean data source. In my project I have created two separate PDF documents, each with its own JavaBean data source. Now I want to merge both documents into a single document. Can anyone tell me how to merge them into a single document using JasperReports?
Unfortunately the usual solution is to build a subreport and use the two different data sources, or whatever connections you used.
But there is an easy way to get around this :D
Just simple, no new reports... voilà!
OK, let's do it:
JasperPrint jp1 = JasperFillManager.fillReport(url.openStream(), parameters,
new JRBeanCollectionDataSource(inspBean));
JasperPrint jp2 = JasperFillManager.fillReport(url.openStream(), parameters,
new JRBeanCollectionDataSource(inspBean));
OK, we now have two filled reports. Let's take our first one, jp1, and add the content of jp2 to it:
List pages = jp2.getPages();
for (int j = 0; j < pages.size(); j++) {
    JRPrintPage page = (JRPrintPage) pages.get(j);
    jp1.addPage(page);
}
JasperViewer.viewReport(jp1, false);
This works like a charm - with a couple of loops you can merge any number of reports together, without creating new reports.
http://lnhomez.blogspot.com/2011/11/merge-multiple-jasper-reports-in-to.html
You can use subreports for this. You don't have to recreate your current reports.
Create a master report with 0 margins. Add each of your reports to it as a subreport, with a condition so that a subreport only prints when its data source is available.
Now put all your individual data sources into one map data source and pass that to the master report.
Configure each subreport against its key in the map.
You can use a list of JasperPrint objects, like this:
List<JasperPrint> jasperPrintList = new ArrayList<JasperPrint>();
jasperPrintList.add(JasperFillManager.fillReport("Report_file1.jasper", getReportMap(1), new JREmptyDataSource()));
jasperPrintList.add(JasperFillManager.fillReport("Report_file2.jasper", getReportMap(2), new JREmptyDataSource()));
jasperPrintList.add(JasperFillManager.fillReport("Report_file3.jasper", getReportMap(3), new JREmptyDataSource()));
JRPdfExporter exporter = new JRPdfExporter();
exporter.setExporterInput(SimpleExporterInput.getInstance(jasperPrintList));
exporter.setExporterOutput(new SimpleOutputStreamExporterOutput("Report_PDF.pdf"));
SimplePdfExporterConfiguration configuration = new SimplePdfExporterConfiguration();
configuration.setCreatingBatchModeBookmarks(true);
exporter.setConfiguration(configuration);
exporter.exportReport();
Multiple Pages in one JasperPrint
Sample Code:
DefaultTableModel dtm = new DefaultTableModel(new Object[0][3], new String[]{"Id", "Name", "Family"});
String[] fields = new String[3];
boolean firstFlag = true;
JasperPrint jp1 = null;
JasperPrint jp2 = null;
for (int i = 0; i <= pagesCount; i++) {
    fields[0] = "id";
    fields[1] = "name";
    fields[2] = "family";
    dtm.insertRow(0, fields);
    try {
        Map<String, Object> params = new HashMap<String, Object>();
        if (firstFlag) {
            jp1 = JasperFillManager.fillReport(getClass().getResourceAsStream(reportsource), params, new JRTableModelDataSource(dtm));
            firstFlag = false;
        } else {
            jp2 = JasperFillManager.fillReport(getClass().getResourceAsStream(reportsource), params, new JRTableModelDataSource(dtm));
            jp1.addPage(jp2.getPages().get(0));
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
JasperViewer.viewReport(jp1, false);
Lahiru Nirmal's answer was simple and to the point. Here's a somewhat expanded version that also copies styles and other things (not all of which I'm positive are crucial).
Note that all of the pages are of the same size.
public static JasperPrint createJasperReport(String name, JasperPrint pattern)
{
JasperPrint newPrint = new JasperPrint();
newPrint.setBottomMargin(pattern.getBottomMargin());
newPrint.setLeftMargin(pattern.getLeftMargin());
newPrint.setTopMargin(pattern.getTopMargin());
newPrint.setRightMargin(pattern.getRightMargin());
newPrint.setLocaleCode(pattern.getLocaleCode());
newPrint.setName(name);
newPrint.setOrientation(pattern.getOrientationValue());
newPrint.setPageHeight(pattern.getPageHeight());
newPrint.setPageWidth(pattern.getPageWidth());
newPrint.setTimeZoneId(pattern.getTimeZoneId());
return newPrint;
}
public static void addJasperPrint(JasperPrint base, JasperPrint add)
{
for (JRStyle style : add.getStyles())
{
String styleName = style.getName();
if (!base.getStylesMap().containsKey(styleName))
{
try
{
base.addStyle(style);
}
catch (JRException e)
{
logger.log(Level.WARNING, "Couldn't add a style", e);
}
}
else
logger.log(Level.FINE, "Dropping duplicate style: " + styleName);
}
for (JRPrintPage page : add.getPages())
base.addPage(page);
if (add.hasProperties())
{
JRPropertiesMap propMap = add.getPropertiesMap();
for (String propName : propMap.getPropertyNames())
{
String propValue = propMap.getProperty(propName);
base.setProperty(propName, propValue);
}
}
if (add.hasParts())
{
PrintParts parts = add.getParts();
Iterator<Entry<Integer, PrintPart>> partsIterator = parts.partsIterator();
while (partsIterator.hasNext())
{
Entry<Integer, PrintPart> partsEntry = partsIterator.next();
base.addPart(partsEntry.getKey(), partsEntry.getValue());
}
}
List<PrintBookmark> bookmarks = add.getBookmarks();
if (bookmarks != null)
for (PrintBookmark bookmark : bookmarks)
base.addBookmark(bookmark);
}
Then, to use it:
JasperPrint combinedPrint = createJasperReport("Multiple Reports",
print1);
for (JasperPrint addPrint : new JasperPrint[] { print1, print2, print3 })
addJasperPrint(combinedPrint, addPrint);
// Now do whatever it was you'd do with the JasperPrint.
String combinedXml = JasperExportManager.exportReportToXml(combinedPrint);
I see JasperReports now has a newer "Report Book" feature that might be a better solution, but I haven't used one yet.
I have tried to search for this almost 'everywhere' but couldn't find a pointer on how to implement it. Please review my code and offer suggestions on how to set/update ALL document properties in SharePoint using OpenCMIS. I have created the documents successfully using CMIS; however, I'm not able to populate different values for different documents.
For example, a.pdf and b.pdf have different properties. So when I update them, I expect the values to be mapped from the array of values assigned to each of them, but at the moment the same values are being applied to all the documents.
Please see my code below; hopefully it will make things clearer:
try {
    String[] nextLine = null;
    CSVReader reader = new CSVReader(new FileReader(indexFileLocation));
    List content = reader.readAll();
    for (Object o : content) {
        nextLine = (String[]) o;
        System.out.println("\n" + nextLine[2] + "\n" + nextLine[7] + "\n" + nextLine[6]);
    }
    //reader.close();
    Map<String, Object> newDocProps = new HashMap<String, Object>();
    newDocProps.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
    newDocProps.put(PropertyIds.NAME, ff.getName());
    Document doc = newFolder.createDocument(newDocProps, contentStream, VersioningState.NONE);
    CmisObject cmisobject = (Document) session.getObject(doc.getId());
    Map<String, Object> pp = new HashMap<String, Object>();
    //pp.put(PropertyIds.OBJECT_ID, "Name");
    pp.put("WorkflowNumber", nextLine[7]);
    pp.put("InvoiceDate", nextLine[2]);
    cmisobject.updateProperties(pp);
Any help is appreciated.
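One thing stands out in the snippet: the CSV loop finishes before the document is created, so by the time updateProperties runs, nextLine only holds the last row - which would explain every document getting the same values. A hedged sketch of moving creation and update inside the loop (contentStreamFor and the column indexes are hypothetical placeholders):
for (Object o : content) {
    String[] row = (String[]) o;
    Map<String, Object> docProps = new HashMap<String, Object>();
    docProps.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
    docProps.put(PropertyIds.NAME, row[6]); // assumption: the file-name column
    Document doc = newFolder.createDocument(docProps, contentStreamFor(row), VersioningState.NONE);
    Map<String, Object> props = new HashMap<String, Object>();
    props.put("WorkflowNumber", row[7]);
    props.put("InvoiceDate", row[2]);
    doc.updateProperties(props); // each document now gets its own row's values
}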
@Albert, how are you creating the session? It could be an issue with session creation. Please paste your session-creation code here.